Title: Self-E: Smartphone-Supported Guidance for Customizable Self-Experimentation
The ubiquity of self-tracking devices and smartphone apps has empowered people to collect data about themselves and try to self-improve. However, people with little to no personal analytics experience may not be able to analyze data or run experiments on their own (self-experiments). To lower the barrier to intervention-based self-experimentation, we developed an app called Self-E, which guides users through the experiment process. We conducted a 2-week diary study with 16 participants from the local population and a second study with a more advanced group of users to investigate how they perceive and carry out self-experiments with the help of Self-E, and what challenges they face. We find that users are influenced by their preconceived notions of how healthy a given behavior is, which makes it difficult for them to follow Self-E's directions and to trust its results. We present suggestions to overcome this challenge, such as incorporating empathy and scaffolding into the system.
Award ID(s):
1656763
PAR ID:
10334575
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID:
1 to 13
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. People often face barriers to selecting self-tracking tools that support their goals and needs, resulting in tools that fall short of their expectations and are ultimately abandoned. We therefore examine how people approach selecting self-tracking apps and investigate how technology can better support the process. Drawing on past literature on how people select and perceive the features of commercial and research tracking tools, we surface seven attributes people consider during selection, and design a low-fidelity prototype of an app store that highlights these attributes. We then conduct semi-structured interviews with 18 participants to further investigate what people consider during selection, how people select self-tracking apps, and how surfacing tracking-related attributes could better support selection. We find that people often prioritize features related to self-tracking during selection, such as approaches to collecting and reflecting on data, and trial apps to determine whether they would suit their needs. Our results also show that technology which surfaces how apps support tracking has the potential to reduce barriers to selection. We discuss future opportunities for improving self-tracking app selection, such as ways to enhance existing self-tracking app distribution platforms to enable people to filter and search apps by desirable features.
  2. People go online for information and support about sensitive topics like depression, infertility, death, or divorce. However, what happens when such topics are algorithmically recommended to them even when they are not looking for them? This article examines people's self-diagnostic behaviors based on algorithmically recommended content, for example, wondering if they might have depression because an algorithm pushed that topic into their view. Specifically, it examines what happens when the sensitive content is generated not by users but by companies in the form of targeted advertisements. This paper explores these questions in three parts. The first part reviews literature on self-diagnosis and targeted advertising. The second part presents a mixed-methods study of how targeted ads can enable self-diagnostic reactions. The third part reflects on the mechanisms that influence self-diagnosis and examines potential regulatory implications.
  3. AI language technologies increasingly assist and expand human communication. While AI-mediated communication reduces human effort, its societal consequences are poorly understood. In this study, we investigate whether using an AI writing assistant for personal self-presentation changes how people talk about themselves. In an online experiment, we asked participants (N=200) to introduce themselves to others. An AI language assistant supported their writing by suggesting sentence completions. The language model generating the suggestions was fine-tuned to preferentially suggest interest, work, or hospitality topics. We evaluate how the topic preference of a language model affects users' topic choice by analyzing the topics participants discussed in their self-presentations. Our results suggest that AI language technologies may change the topics their users talk about. We discuss the need for a careful debate and evaluation of the topic priors built into AI language technologies.
  4. Recent advances in generative models have made it increasingly difficult to distinguish real data from model-generated synthetic data. Using synthetic data for successive training of future model generations creates “self-consuming loops,” which may lead to model collapse or training instability. Furthermore, synthetic data is often subject to human feedback and curated by users based on their preferences. Ferbach et al. (2024) recently showed that when data is curated according to user preferences, the self-consuming retraining loop drives the model to converge toward a distribution that optimizes those preferences. However, in practice, data curation is often noisy or adversarially manipulated. For example, competing platforms may recruit malicious users to adversarially curate data and disrupt rival models. In this paper, we study how generative models evolve under self-consuming retraining loops with noisy and adversarially curated data. We theoretically analyze the impact of such noisy data curation on generative models and identify conditions for the robustness and stability of the retraining process. Building on this analysis, we design attack algorithms for competitive adversarial scenarios, where a platform with a limited budget employs malicious users to misalign a rival’s model from actual user preferences. Experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed algorithms. 
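  To make the self-consuming retraining loop described in the preceding abstract concrete, here is a minimal toy sketch in Python; it is not taken from that paper, and the Gaussian generator, preference function, and noise level are all illustrative assumptions. Each generation, the model samples synthetic data, a noisy preference filter curates it, and the model is refit on the curated samples.

  # Toy self-consuming retraining loop with noisy preference-based curation.
  # All quantities here are hypothetical illustrations, not the paper's setup.
  import numpy as np

  rng = np.random.default_rng(0)

  def preference_keep_prob(x, noise=0.2):
      # "True" preference favors samples near x = 1; `noise` models a noisy
      # (or adversarially flipped) curator that sometimes inverts the decision.
      p = np.exp(-((x - 1.0) ** 2))
      return (1 - noise) * p + noise * (1 - p)

  # Generation 0: a Gaussian "model" fit to real data centered at 0.
  mu, sigma = 0.0, 1.0
  for gen in range(10):
      synthetic = rng.normal(mu, sigma, size=5000)          # sample from current model
      keep = rng.random(synthetic.size) < preference_keep_prob(synthetic)
      curated = synthetic[keep]                              # curator filters samples
      mu, sigma = curated.mean(), curated.std()              # refit model on curated data
      print(f"gen {gen}: mu={mu:.3f}, sigma={sigma:.3f}")

  Running this, the fitted mean drifts toward the preference optimum and the variance shrinks over generations, illustrating the convergence-toward-preferences behavior the abstract describes; increasing the noise parameter perturbs or slows that convergence.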
  5. How do people assess the likelihood of personal risk in online activity? In three pilot experiments and one preregistered experiment, we tested the motivational and cognitive mechanisms that shape self and social judgments of cyber security. In Pilot Studies 1–3, we probed for evidence of differential use of base rate information in forecasting the likelihood that oneself or another person would engage in a risky behavior. In the preregistered experiment, we gathered direct evidence of differential use of base rate information through covert eye-tracking. The data suggest that people self-enhance when assessing risk, believing they are less likely than others to engage in actions that pose a threat to their cyber security, particularly because they rely less on base rate information when predicting their own behavior than when predicting others' behavior. Self and social judgments did not differ when scenarios posed no risk. We discuss implications for self-insight and interventions to curb risky behavior in online activity.