

Search for: All records

Creators/Authors contains: "Park, Jinkyung"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. In the smart home landscape, there is an increasing trend of homeowners sharing device access outside their homes. This practice presents unique challenges in terms of security and privacy. In this study, we evaluated the co-management features in smart home management systems to investigate 1) how homeowners establish and authenticate shared users’ access, 2) the access control mechanisms, and 3) the management, monitoring, and revocation of access for shared devices. We conducted a systematic feature analysis of 11 Android and iOS mobile applications (“apps”) and 2 open-source platforms designed for smart home management. Our study revealed that most smart home systems adopt a centralized control model that requires shared users to use the primary app for device access, while providing diverse sharing mechanisms, such as email or phone invitations and unique codes, each presenting distinct security and privacy advantages. Moreover, we discovered a variety of access control options, ranging from full access to granular access controls, such as time-based restrictions, which, while enhancing security and convenience, require careful management to avoid user confusion. Additionally, our findings highlighted the prevalence of comprehensive methods for monitoring shared users’ access, with most systems providing detailed logs for added transparency and security, although there are some restrictions to safeguard homeowner privacy. Based on our findings, we recommend enhanced access control features to improve user experience in shared settings.
    Free, publicly-accessible full text available November 1, 2025
  2. The debate on whether social media has a net positive or negative effect on youth is ongoing. Therefore, we conducted a thematic analysis of 2,061 posts made by 1,038 adolescents aged 15-17 on an online peer-support platform to investigate the ways in which these teens discussed popular social media platforms in their posts and to identify differences in their experiences across platforms. Our findings revealed four main emergent themes for the ways in which social media was discussed: 1) Sharing negative experiences or outcomes of social media use (58%, n = 1,095), 2) Attempts to connect with others (45%, n = 922), 3) Highlighting the positive side of social media use (20%, n = 409), and 4) Seeking information (20%, n = 491). Overall, while sharing about negative experiences was more prominent, teens also discussed balanced perspectives of connection-seeking, positive experiences, and information support on social media that should not be discounted. Moreover, we found statistically significant differences in how these experiences varied across social media platforms. For instance, teens were most likely to seek romantic relationships on Snapchat and self-promote on YouTube. Meanwhile, Instagram was mentioned most frequently for body shaming, and Facebook was the most commonly discussed platform for privacy violations (mostly from parents). The key takeaway from our study is that the benefits and drawbacks of teens' social media usage can co-exist, and net effects (positive or negative) can vary across teens and contexts. As such, we advocate for mitigating the negative experiences and outcomes of social media use as voiced by teens, to improve, rather than limit or restrict, their overall social media experience. We do this by taking an affordance perspective that aims to promote the digital well-being and online safety of youth by design.

     
    Free, publicly-accessible full text available November 7, 2025
  3. Free, publicly-accessible full text available June 17, 2025
  4. Accurate real-time risk identification is vital to protecting social media users from online harm, which has driven research towards advancements in machine learning (ML). While strides have been made regarding the computational facets of algorithms for “real-time” risk detection, such research has not yet evaluated these advancements through a human-centered lens. To this end, we conducted a systematic literature review of 53 peer-reviewed articles on real-time risk detection on social media. Real-time detection was mainly operationalized as “early” detection after-the-fact, based on pre-defined chunks of data, and evaluated with standard performance metrics, such as timeliness. We identified several human-centered opportunities for advancing current algorithms, such as integrating human insight into feature selection, improving algorithms by accounting for human behavior, and incorporating human evaluations. This work serves as a critical call-to-action for the HCI and ML communities to work together to protect social media users before, during, and after exposure to risks.
    Free, publicly-accessible full text available May 11, 2025
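The chunk-based "early" detection described in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the reviewed systems' code: the per-post risk scores, chunk size, and threshold below are all invented for demonstration, and "timeliness" is operationalized as the index of the chunk at which the stream is first flagged.

```python
# Hypothetical sketch of chunk-based "early" risk detection: a stream of
# per-post risk scores is consumed in fixed-size chunks, and the stream is
# flagged as soon as its running mean risk crosses a threshold. All scores
# and parameters are illustrative assumptions, not from the reviewed papers.

def early_detect(scores, chunk_size=3, threshold=1.5):
    """Return the index of the first chunk at which the cumulative mean
    risk exceeds `threshold`, or None if the stream is never flagged."""
    total = 0.0
    seen = 0
    for i in range(0, len(scores), chunk_size):
        chunk = scores[i:i + chunk_size]
        total += sum(chunk)
        seen += len(chunk)
        if total / seen > threshold:
            return i // chunk_size  # chunk index serves as a timeliness measure
    return None

# A risky stream is caught at the second chunk; a benign one never fires.
print(early_detect([0.2, 0.4, 0.3, 2.9, 3.1, 2.8]))  # → 1
print(early_detect([0.1, 0.2, 0.1]))                 # → None
```

The human-centered gap the review points to is visible even here: detection only happens "after-the-fact" at chunk boundaries, so a smaller `chunk_size` trades computational cost for earlier intervention.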
  5. Free, publicly-accessible full text available May 2, 2025
  6. Traditional online safety technologies often overly restrict teens and invade their privacy, while parents often lack knowledge regarding their digital privacy. As such, prior researchers have called for more collaborative approaches to adolescent online safety and networked privacy. In this paper, we propose family-centered approaches that foster parent-teen collaboration on mobile privacy and online safety while respecting individual privacy, enhancing open discussion and teens' self-regulation. However, challenges such as power imbalances and conflicts with family values arise when implementing such approaches, making parent-teen collaboration difficult. Therefore, attending the family-centered design workshop provided an invaluable opportunity for us to discuss these challenges and identify best research practices for the future of collaborative online safety and privacy within families.
  7. Adolescent online safety research has largely focused on designing interventions for teens, with few evaluations that provide effective online safety solutions. It is challenging to evaluate such solutions without simulating an environment that mimics teens' online risks. To overcome this gap, we conducted focus groups with 14 teens to co-design realistic online risk scenarios and their associated user personas, which can be implemented for an ecologically valid evaluation of interventions. We found that teens considered the characteristics of the risky user to be important and designed personas to have traits that aligned with the risk type, to be more believable and authentic, and to attract teens through materialistic content. Teens also redesigned the risky scenarios to be subtle in information breaching, harsher in cyberbullying, and convincing in tricking the teen. Overall, this work provides an in-depth understanding of the types of bad actors and risky scenarios teens design for realistic research experimentation.
  8. As part of a Youth Advisory Board of teens (YAB), a longitudinal and interactive program to engage with teens for adolescent online safety research, we used an Asynchronous Remote Community (ARC) method with seven teens to explore their social media usage and perspectives on privacy on social media. There was a spectrum of privacy levels in our teen participants’ preferred social media platforms, and preferences varied depending on their user goals, such as content viewing and socializing. They recognized the privacy risks they could encounter on social media and hence actively used the privacy features afforded by platforms to stay safe while meeting their goals. In addition, our teen participants designed solutions that can aid users in exercising more granular control over which information on their accounts is shared with which groups of users. Our findings highlight the need to ensure researchers and social media developers work with teens to provide teen-centric solutions for safer experiences on social media.
  9. With the growing ubiquity of the Internet and access to media-based social media platforms, the risks associated with media content sharing on social media and the need for safety measures against such risks have grown paramount. At the same time, risk is highly contextualized, especially when it comes to media content youth share privately on social media. In this work, we conducted qualitative content analyses on risky media content flagged by youth participants and research assistants of similar ages to explore contextual dimensions of youth online risks. The contextual risk dimensions were then used to inform semi- and self-supervised state-of-the-art vision transformers to automate the process of identifying risky images shared by youth. We found that vision transformers are capable of learning complex image features for use in automated risk detection and classification. The results of our study serve as a foundation for designing contextualized and youth-centered machine-learning methods for automated online risk detection. 
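The classification step described in the abstract above (a vision transformer maps each image to an embedding, and a head assigns a contextual risk class) can be illustrated with a minimal stand-in. This sketch is purely hypothetical: the 4-dimensional "embeddings", the class names, and the nearest-centroid head are invented for demonstration, whereas the paper fine-tunes semi- and self-supervised vision transformers on real flagged images.

```python
# Minimal stand-in for embedding-based risk classification: hypothetical
# fixed-length "embeddings" (as a ViT would produce) are assigned to a risk
# class by a nearest-centroid rule. The labels and vectors are invented;
# the actual pipeline fine-tunes a vision transformer end to end.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(embedding, centroids):
    """Return the class whose centroid is nearest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist2(embedding, centroids[c]))

# Hypothetical labeled embeddings for two illustrative risk dimensions.
train = {
    "risky":    [[0.9, 0.8, 0.1, 0.2], [0.8, 0.9, 0.2, 0.1]],
    "harmless": [[0.1, 0.2, 0.9, 0.8], [0.2, 0.1, 0.8, 0.9]],
}
centroids = {label: centroid(vecs) for label, vecs in train.items()}
print(classify([0.85, 0.85, 0.15, 0.15], centroids))  # → risky
```

The design point the abstract makes survives even in this toy form: once contextual risk dimensions are defined from the qualitative coding, they become the label space the learned image features are mapped into.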
  10. Multiple recent efforts have used large-scale data and computational models to automatically detect misinformation in online news articles. Given the potential impact of misinformation on democracy, many of these efforts have also used the political ideology of these articles to better model misinformation and study political bias in such algorithms. However, almost all such efforts have used source-level labels for credibility and political alignment, thereby assigning the same credibility and political alignment label to all articles from the same source (e.g., the New York Times or Breitbart). Here, we report on the impact of using journalistic best practices to label individual news articles for their credibility and political alignment. We found that while source-level labels are decent proxies for political alignment labeling, they are very poor proxies (almost the same as flipping a coin) for credibility ratings. Next, we study the implications of such source-level labeling on downstream processes such as the development of automated misinformation detection algorithms and political fairness audits therein. We find that the automated misinformation detection and fairness algorithms can be suitably revised to support their intended goals but might require different assumptions and methods than those which are appropriate under source-level labeling. The results suggest caution in generalizing recent results on misinformation detection and political bias therein. On a positive note, this work shares a new dataset of individually labeled, journalistic-quality articles and an approach for misinformation detection and fairness audits.
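The "poor proxy" finding in the abstract above is easy to make concrete. The sketch below uses a fabricated toy dataset (not the paper's data): every article inherits its source's label, and we measure how often that inherited label agrees with the article-level label. When credibility varies within a source, the proxy accuracy drops toward chance.

```python
# Illustrative sketch (fabricated toy labels, not the paper's dataset) of why
# source-level credibility labels can be poor proxies for article-level ones:
# every article inherits its source's label, and we measure agreement with
# the per-article labels.

from collections import Counter

# (source, article_level_credible) pairs: hypothetical examples only.
articles = [
    ("outlet_a", True), ("outlet_a", False), ("outlet_a", True), ("outlet_a", False),
    ("outlet_b", False), ("outlet_b", True), ("outlet_b", False), ("outlet_b", True),
]

# Source-level label = majority article label per source (one common heuristic).
by_source = {}
for src, label in articles:
    by_source.setdefault(src, []).append(label)
source_label = {src: Counter(lbls).most_common(1)[0][0] for src, lbls in by_source.items()}

# Proxy accuracy: how often the inherited source label matches the article label.
agree = sum(source_label[src] == label for src, label in articles)
print(agree / len(articles))  # 0.5 here: no better than a coin flip
```

With the toy labels split evenly within each outlet, the source-level proxy is right exactly half the time, mirroring the abstract's "flipping a coin" characterization for credibility (political alignment, which varies less within a source, would fare better under the same scheme).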