Whether investigating research questions or designing systems, many researchers and designers need to engage users with their personal data. It is difficult, however, to design successful user-facing tools for interacting with personal data without first understanding what users want to do with that data. Techniques such as raw data exploration, sketching, or physicalization can avoid the perils of tool development, but they deny participants direct analytical access to their rich personal data. We present a new method that directly tackles this challenge: the data engagement interview. This method incorporates an analyst who performs real-time analysis of a participant's personal data, letting participants engage directly with their data while interviewers observe and ask questions throughout the engagement. We describe the method's development through a case study with asthmatic participants, share guidance drawn from our experience, and report a broad set of insights from these interviews.
Personal Time-Lapse
Our bodies are constantly in motion: from the bending of arms and legs to the less conscious movement of breathing, our precise shape and position change from moment to moment. This makes subtler developments, such as the growth of hair or the healing of a wound, difficult to observe. Our work focuses on helping users record and visualize this type of subtle, longer-term change. We present a mobile tool that combines custom 3D tracking with interactive visual feedback and computational imaging to capture a personal time-lapse, which approximates longer-term video of the subject (typically, part of the capturing user's body) under a fixed viewpoint, body pose, and lighting condition. These personal time-lapses offer a powerful and detailed way to track visual changes in the subject over time. We begin with a formative study that examines what makes personal time-lapse so difficult to capture. Building on our findings, we motivate the design of our capture tool, evaluate the design with users, and demonstrate its effectiveness on a variety of challenging examples.
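The abstract describes the capture pipeline only at a high level. As a minimal illustration of one likely component, the viewpoint re-alignment check, the hypothetical Python sketch below compares a live 6-DoF camera pose against a stored reference pose; the pose source, function names, and tolerances are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of viewpoint re-alignment for repeat capture.
# Assumes poses arrive from an AR tracking framework as 4x4
# camera-to-world matrices; tolerances are illustrative only.
import numpy as np

POS_TOL_M = 0.02   # 2 cm translation tolerance (assumed)
ANG_TOL_DEG = 3.0  # 3 degree rotation tolerance (assumed)

def pose_error(ref: np.ndarray, cur: np.ndarray):
    """Translation (m) and rotation (deg) error between two 4x4 poses."""
    t_err = np.linalg.norm(cur[:3, 3] - ref[:3, 3])
    # Relative rotation: angle of R_ref^T @ R_cur via the trace formula.
    r_rel = ref[:3, :3].T @ cur[:3, :3]
    cos_a = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    a_err = np.degrees(np.arccos(cos_a))
    return t_err, a_err

def aligned(ref: np.ndarray, cur: np.ndarray) -> bool:
    """True when the live pose matches the reference closely enough
    to re-capture the subject from the same viewpoint."""
    t_err, a_err = pose_error(ref, cur)
    return t_err <= POS_TOL_M and a_err <= ANG_TOL_DEG
```

In a capture loop, a check like `aligned(...)` could drive the interactive visual feedback that guides the user back to the original framing before each new frame is taken.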
- Award ID(s): 2340448
- PAR ID: 10594566
- Publisher / Repository: ACM UIST 2024
- Date Published:
- ISBN: 9798400706288
- Page Range / eLocation ID: 1 to 13
- Subject(s) / Keyword(s): Augmented Reality; Human-Computer Interaction (HCI); Mobile Application; 3D Tracking; Computational Photography; Remote Healthcare; Telemedicine
- Format(s): Medium: X
- Location: Pittsburgh, PA, USA
- Sponsoring Org: National Science Foundation
More Like this
-
Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential benefits, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.
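The abstract does not detail how speech is mapped to time ranges. As a minimal sketch of the kind of speech-driven time manipulation it describes, this hypothetical Python snippet resolves a few recognized phrases into concrete date ranges that a chart could then display; the phrase grammar and names are assumptions, not Data@Hand's actual implementation.

```python
# Hypothetical sketch: map recognized speech phrases to date ranges
# for a health chart. Phrase set and function names are assumed.
import datetime as dt
import re

def resolve_time_phrase(phrase: str, today: dt.date) -> tuple[dt.date, dt.date]:
    """Return (start, end) dates for a simple relative-time phrase."""
    phrase = phrase.lower().strip()
    if phrase == "this week":
        start = today - dt.timedelta(days=today.weekday())  # Monday
        return start, today
    m = re.fullmatch(r"last (\d+) (day|week|month)s?", phrase)
    if m:
        n, unit = int(m.group(1)), m.group(2)
        days = n * {"day": 1, "week": 7, "month": 30}[unit]
        return today - dt.timedelta(days=days), today
    raise ValueError(f"unrecognized time phrase: {phrase!r}")

# Example: "last 2 weeks" -> the 14 days ending on the given date.
start, end = resolve_time_phrase("last 2 weeks", dt.date(2024, 5, 1))
```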
-
We propose a new time-dependent predictive model of user-item ratings centered on local coherence: while both users and items are constantly in flux, within a short-term sequence the neighborhood of a particular user or item is likely to be coherent. Three unique characteristics of the framework are: (i) it incorporates both implicit and explicit feedback by extracting the local coherence hidden in the feedback sequences; (ii) it uses parallel recurrent neural networks to capture the evolution of users and items, resulting in a dual-factor recommendation model; and (iii) it combines coherence-enhanced consistent latent factors with dynamic latent factors to balance short-term changes against long-term trends for improved recommendation. Through experiments on Goodreads and Amazon, we find that the proposed model can outperform state-of-the-art models in predicting users' preferences.
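As a rough, hypothetical illustration of the dual-factor idea (parallel recurrent networks tracking user and item evolution, combined with static latent factors), here is a PyTorch sketch; the dimensions, history encoding, and fusion rule are assumptions rather than the paper's model.

```python
# Hypothetical PyTorch sketch of a dual recurrent recommender: parallel
# GRUs model user and item drift, and their states are combined with
# static latent factors to score a rating. Shapes and fusion are assumed.
import torch
import torch.nn as nn

class DualFactorRec(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_static = nn.Embedding(n_users, dim)  # long-term user factor
        self.item_static = nn.Embedding(n_items, dim)  # long-term item factor
        self.user_rnn = nn.GRU(dim, dim, batch_first=True)  # user evolution
        self.item_rnn = nn.GRU(dim, dim, batch_first=True)  # item evolution

    def forward(self, u, i, u_hist, i_hist):
        # u_hist: items the user recently rated, shape (B, Lu)
        # i_hist: users who recently rated the item, shape (B, Li)
        _, u_dyn = self.user_rnn(self.item_static(u_hist))
        _, i_dyn = self.item_rnn(self.user_static(i_hist))
        u_vec = self.user_static(u) + u_dyn.squeeze(0)  # static + dynamic
        i_vec = self.item_static(i) + i_dyn.squeeze(0)
        return (u_vec * i_vec).sum(-1)  # predicted rating score, shape (B,)
```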
-
Science and technology journalists today face challenges in finding newsworthy leads due to increased workloads, reduced resources, and expanding scientific publishing ecosystems. Given this context, we explore computational methods that aid these journalists' news discovery while preserving their agency and saving them time. We prototyped three computational information subsidies into an interactive tool that we used as a probe to better understand how such a tool may offer utility or, more broadly, shape the practices of professional science journalists. Our findings highlight central considerations around science journalists' user agency, contexts of use, and professional responsibility that such tools can influence and could account for in design. Based on this, we suggest design opportunities for enhancing and extending user agency over the longer term; incorporating contextual, personal, and collaborative notions of newsworthiness; and leveraging flexible interfaces and generative models. Overall, our findings contribute a richer view of the sociotechnical system around computational news discovery tools and suggest ways to improve such tools to better support the practices of science journalists.
-
This paper explores the feasibility of using sonification to deliver and communicate health and wellness status on personal devices. Ambient displays have proven able to inform users of their health and wellness and to help them make healthier decisions; yet little technology provides health assessments through sound, which can be even more pervasive than visual displays. We developed a method to generate music from user preferences and evaluated it in a two-step user study. In the first step, we acquired general healthiness impressions from each user. In the second step, we generated customized melodies from the music preferences gathered in the first step to capture participants' perceived healthiness of those melodies. We deployed the surveys to 55 participants, who completed them on their own over 31 days. We analyzed the data to understand commonalities and differences in users' perceptions of music as an expression of health. Our findings show clear associations between perceived healthiness and different musical features. We provide useful insights into how different musical features affect the perceived healthiness of music, how perceptions of healthiness vary between users, what trends exist across users' impressions, and what does (or does not) influence a user's perception of healthiness in a melody. Overall, our results indicate the validity of presenting health data through personalized music models. These findings can inform the design of behavior management applications on personal and ubiquitous devices.
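The generation model itself is not specified in the abstract. As a hypothetical sketch of mapping a healthiness score to musical features, the snippet below varies tempo, register, and scale with a 0-1 score and emits a short melody; all mappings are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: map a 0-1 healthiness score to musical features
# (tempo, register, major vs. minor scale) and emit a short melody as
# (MIDI pitch, duration-in-beats) pairs. Mappings are illustrative only.
import random

MAJOR = [0, 2, 4, 5, 7, 9, 11]   # brighter scale for healthier scores
MINOR = [0, 2, 3, 5, 7, 8, 10]   # darker scale for lower scores

def health_to_melody(score: float, n_notes: int = 8, seed: int = 0):
    """Return (tempo_bpm, notes) where notes are (midi_pitch, beats)."""
    rng = random.Random(seed)
    tempo = 60 + int(60 * score)   # 60-120 BPM: healthier = livelier
    root = 48 + int(12 * score)    # healthier = higher register
    scale = MAJOR if score >= 0.5 else MINOR
    notes = [(root + rng.choice(scale), rng.choice([0.5, 1.0]))
             for _ in range(n_notes)]
    return tempo, notes

tempo, notes = health_to_melody(0.8)  # e.g., a bright, brisk melody
```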