Title: Nudge-nudge, WNK-WNK (kinases), say no more?
Award ID(s):
1713880
PAR ID:
10061961
Author(s) / Creator(s):
 ;  ;  ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
New Phytologist
Volume:
220
Issue:
1
ISSN:
0028-646X
Page Range / eLocation ID:
p. 35-48
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. There is growing concern regarding adolescent online risks posed by social media. Prior work calls for a paradigm shift from restrictive approaches towards strength-based solutions to online safety that provide autonomy and control to teens. To better understand how we might design online safety interventions that help teens deal with online risks, we must include teens as partners in the design and evaluation of online safety solutions. To address this gap, my first dissertation study focused on co-designing online safety features with teens, which showed that teens often design real-time interventions that resemble "nudges". Therefore, my dissertation focuses on evaluating the effectiveness of these nudge designs in an ecologically valid social media simulation. To do this, I will conduct three studies: 1) a User Experience Bootcamp with teens to teach them design skills for co-designing online safety features, 2) a focus group study to design an ecologically valid social media simulation, and 3) a between-subjects experiment within a social media simulation to evaluate the effect of nudges in educating teens and helping them make safer choices when exposed to risk. My goal for this research is to understand, design, develop, and evaluate online safety nudges that can help promote self-regulated, autonomous, and safer interactions for teens online.
  2. In this position paper, we propose the use of existing XAI frameworks to design interventions in scenarios where algorithms expose users to problematic content (e.g. anti-vaccine videos). Our intervention design includes facts (to indicate the algorithmic justification of what happened) accompanied by either forewarnings or counterfactual explanations. While forewarnings indicate potential risks of an action to users, counterfactual explanations indicate what actions users should perform to change the algorithmic outcome. We envision the use of such interventions as 'decision aids' that will help users make informed choices.