-
Free, publicly-accessible full text available May 11, 2025
-
Activity tracking has the potential to promote active lifestyles among older adults. However, current activity tracking technologies may inadvertently perpetuate ageism by focusing on age-related health risks. Advocating for a personalized approach in activity tracking technology, we sought to understand what activities older adults find meaningful to track and the underlying values of those activities. We conducted a reflective interview study following a 7-day activity journaling with 13 participants. We identified various underlying values motivating participants to track activities they deemed meaningful. These values, whether competing or aligned, shape the desirability of activities. Older adults appreciate low-exertion activities, but they are difficult to track. We discuss how these activities can become central in designing activity tracking systems. Our research offers insights for creating value-driven, personalized activity trackers that resonate more fully with the meaningful activities of older adults.
-
Local explanations provide heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has become one of the most popular explainable AI (XAI) techniques for diagnosing CNNs. Through our formative study (S1), however, we found ML engineers ambivalent about local explanations: they view them as a valuable and indispensable tool for building CNNs, yet find the process exhausting because detecting vulnerabilities is heuristic in nature. Moreover, steering a CNN based on the vulnerabilities learned from diagnosis seemed highly challenging. To close this gap, we designed DeepFuse, the first interactive system that realizes a direct feedback loop between a user and a CNN for diagnosing and revising the CNN's vulnerabilities using local explanations. DeepFuse helps CNN engineers systematically search for unreasonable local explanations and annotate new boundaries for those identified as unreasonable in a labor-efficient manner. It then steers the model based on the given annotations so that the model does not repeat similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants built a more accurate and reasonable model than the current state of the art. Participants also found that the way DeepFuse guides case-based reasoning could practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward and make XAI-driven insights more actionable.
-
Speech as a natural and low-burden input modality has great potential to support personal data capture. However, little is known about how people use speech input, together with traditional touch input, to capture different types of data in self-tracking contexts. In this work, we designed and developed NoteWordy, a multimodal self-tracking application integrating touch and speech input, and deployed it in the context of productivity tracking for two weeks (N = 17). Our participants used the two input modalities differently, depending on the data type as well as personal preferences, error tolerance for speech recognition issues, and social surroundings. Additionally, we found speech input reduced participants' diary entry time and enhanced the data richness of the free-form text. Drawing from the findings, we discuss opportunities for supporting efficient personal data capture with multimodal input and implications for improving the user experience with natural language input to capture various self-tracking data.
-
Abstract Among individuals with psychotic disorders, paranoid ideation is common and associated with increased impairment, decreased quality of life, and a more pessimistic prognosis. Although accumulating research indicates negative affect is a key precipitant of paranoid ideation, the possible protective role of positive affect has not been examined. Further, despite the interpersonal nature of paranoid ideation, there are limited and inconsistent findings regarding how social context, perceptions, and motivation influence paranoid ideation in real-world contexts. In this pilot study, we used smartphone ecological momentary assessment to understand the relevance of hour-by-hour fluctuations in mood and social experience for paranoid ideation in adults with psychotic disorders. Multilevel modeling results indicated that greater negative affect is associated with higher concurrent levels of paranoid ideation and that it is marginally related to elevated levels of future paranoid ideation. In contrast, positive affect was unrelated to momentary experiences of paranoid ideation. More severe momentary paranoid ideation was also associated with an elevated desire to withdraw from social encounters, irrespective of whether participants were with familiar or unfamiliar others. These observations underscore the role of negative affect in promoting paranoid ideation and highlight the contribution of paranoid ideation to the motivation to socially withdraw in psychotic disorders.
-
Current activity tracking technologies are largely trained on younger adults’ data, which can lead to solutions that are not well-suited for older adults. To build activity trackers for older adults, it is crucial to collect training data with them. To this end, we examine the feasibility of, and challenges with, collecting activity labels from older adults by leveraging speech. Specifically, we built MyMove, a speech-based smartwatch app to facilitate in-situ labeling with a low capture burden. We conducted a 7-day deployment study, in which 13 older adults collected their activity labels and smartwatch sensor data while wearing a thigh-worn activity monitor. Participants were highly engaged, capturing 1,224 verbal reports in total. We extracted 1,885 activities with corresponding effort levels and timespans, and examined the usefulness of these reports as activity labels. We discuss the implications of our approach and the collected dataset for supporting older adults through personalized activity tracking technologies.
-
The factors influencing people’s food decisions, such as one’s mood and eating environment, are important information for fostering self-reflection and developing a personalized healthy diet. However, such factors are difficult to collect consistently due to the heavy data capture burden. In this work, we examine how speech input supports capturing everyday food practice through a week-long data collection study (N = 11). We deployed FoodScrap, a speech-based food journaling app that allows people to capture food components, preparation methods, and food decisions. Using speech input, participants detailed their meal ingredients and elaborated on their food decisions by describing the eating moments, explaining their eating strategies, and assessing their food practices. Participants recognized that speech input facilitated self-reflection, but expressed concerns around re-recording, mental load, social constraints, and privacy. We discuss how speech input can support low-burden and reflective food journaling, and opportunities for effectively processing and presenting large amounts of speech data.
-
Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential benefits, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.