Title: A User Study of a Wearable System to Enhance Bystanders’ Facial Privacy
The privacy of users and their information is becoming increasingly important with the growth and pervasive use of mobile devices such as wearables, mobile phones, drones, and Internet of Things (IoT) devices. Many of these devices are equipped with cameras that let users take pictures and record video at any time. In many such situations, bystanders' privacy is not taken into account, and as a result, audio and video of bystanders are often captured without their consent. We present results from a user study in which 21 participants were asked to use a wearable system called FacePET, developed to enhance bystanders' facial privacy by giving bystanders a way to protect their own privacy rather than relying on external systems for protection. While past work has examined the privacy perceptions of bystanders photographed in public or shared spaces, user perceptions of bystander-based wearable devices for enhancing privacy have not been studied. Thus, in this work, we focus on user perceptions of the FacePET device and similar wearables for enhancing bystanders' facial privacy. In our study, we found that 16 participants would use FacePET or similar devices to enhance their facial privacy, and 17 participants agreed that if smart glasses had features to conceal users' identities, those features would help smart glasses become more popular.
Authors:
Award ID(s):
1950416
Publication Date:
NSF-PAR ID:
10205752
Journal Name:
IoT
Volume:
1
Issue:
2
Page Range or eLocation-ID:
198 to 217
ISSN:
2624-831X
Sponsoring Org:
National Science Foundation
More Like this
  1. Wearable devices are a popular class of portable ubiquitous technology. These devices are available in a variety of forms, ranging from smart glasses to smart rings. The fact that smart wearable devices are attached to the body makes them particularly suitable to be integrated into people’s daily lives. Thus, we propose that wearables can be particularly useful to help people make sense of different kinds of information and situations in the course of their everyday activities, in other words, to help support learning in everyday life. Further, different forms of wearables have different affordances, leading to varying perceptions and preferences depending on the purpose and context of use. While there is research on wearable use in the learning context, it is mostly limited to specific settings and usually only explores wearable use for a specific task. This paper presents an online survey with 70 participants conducted to understand users’ preferences and perceptions of how wearables may be used to support learning in their everyday life. Multiple ways of using wearables for learning were proposed. Asking for information was the most common learning-oriented use. The smartwatch/wristband, followed by smart glasses, was the most preferred wearable form factor to support learning. Our survey results also showed that the choice of wearable type to use for learning is associated with prior wearable experience and that the perceived social influence of wearables decreases significantly with experience gained with a fitness tracker. Overall, our study indicates that wearable devices have untapped potential to be used for learning in daily life and that different form factors are perceived to afford different functions and are used for different purposes.
  2. The proliferation of the Internet of Things (IoT) has started transforming our lifestyle through the automation of home appliances. However, some users are hesitant to adopt IoT devices due to various privacy and security concerns. In this paper, we elicit people’s attitudes and concerns towards adopting IoT devices. We conduct an online survey and collect responses from 232 participants from three different geographic regions (United States, Europe, and India); the participants consist of both adopters and non-adopters of IoT devices. Through data analysis, we determine that there are both similarities and differences in perceptions and concerns between adopters and non-adopters. For example, even though IoT and non-IoT users share similar security and privacy concerns, IoT users are more comfortable using IoT devices in private settings compared to non-IoT users. Furthermore, when comparing users’ attitudes and concerns across different geographic regions, we found similarities between participants from the US and Europe, yet participants from India showcased contrasting behavior. For instance, we found that participants from India were more trusting of their government to properly protect consumer data and were more comfortable using IoT devices in a variety of public settings, compared to participants from the US and Europe. Based on our findings, we provide recommendations to reduce users’ concerns in adopting IoT devices, and thereby enhance user trust towards adopting them.
  3. Raynal, Ann M.; Ranney, Kenneth I. (Eds.)
    Most research in technologies for the Deaf community has focused on translation using either video or wearable devices. Sensor-augmented gloves have been reported to yield higher gesture recognition rates than camera-based systems; however, they cannot capture information expressed through head and body movement. Gloves are also intrusive and inhibit users in their pursuit of normal daily life, while cameras can raise concerns over privacy and are ineffective in the dark. In contrast, RF sensors are non-contact, non-invasive, and do not reveal private information even if hacked. Although RF sensors are unable to measure facial expressions or hand shapes, which would be required for complete translation, this paper aims to exploit near real-time ASL recognition using RF sensors for the design of smart Deaf spaces. In this way, we hope to enable the Deaf community to benefit from advances in technologies that could generate tangible improvements in their quality of life. More specifically, this paper investigates near real-time implementation of machine learning and deep learning architectures for the purpose of sequential ASL signing recognition. We utilize a 60 GHz RF sensor that transmits a frequency-modulated continuous-wave (FMCW) waveform. RF sensors can acquire a unique source of information that is inaccessible to optical or wearable devices: namely, a visual representation of the kinematic patterns of motion via the micro-Doppler signature. Micro-Doppler refers to frequency modulations that appear about the central Doppler shift, which are caused by rotational or vibrational motions that deviate from the principal translational motion. In prior work, we showed that fractal complexity computed from RF data could be used to discriminate signing from daily activities and that RF data could reveal linguistic properties, such as coarticulation.
We have also shown that machine learning can be used to discriminate with 99% accuracy the signing of native Deaf ASL users from that of copysigning (or imitation signing) by hearing individuals. Therefore, imitation signing data is not effective for directly training deep models. However, adversarial learning can be used to transform imitation signing to resemble native signing, or, alternatively, physics-aware generative models can be used to synthesize ASL micro-Doppler signatures for training deep neural networks. With such approaches, we have achieved over 90% recognition accuracy of 20 ASL signs. In natural environments, however, near real-time implementations of classification algorithms are required, as well as an ability to process data streams in a continuous and sequential fashion. In this work, we focus on extensions of our prior work towards this aim, and compare the efficacy of various approaches for embedding deep neural networks (DNNs) on platforms such as a Raspberry Pi or Jetson board. We examine methods for optimizing the size and computational complexity of DNNs for embedded micro-Doppler analysis, methods for network compression, and their resulting sequential ASL recognition performance.
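The streaming, sequential classification described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a hypothetical per-frame classifier has already produced a label for each incoming frame, and shows a common way to stabilize such near real-time output on an embedded board: a sliding-window majority vote.

```python
# Hedged sketch: smooth a stream of per-frame sign predictions with a
# sliding-window majority vote, suppressing single-frame glitches.
from collections import Counter, deque

def smooth_stream(frame_preds, window=5):
    """Yield a majority-vote label for each incoming frame prediction."""
    buf = deque(maxlen=window)  # holds only the most recent `window` labels
    for label in frame_preds:
        buf.append(label)
        yield Counter(buf).most_common(1)[0][0]

# Hypothetical per-frame classifier outputs for two signs,
# with one spurious mid-sequence misclassification.
raw = ["HELLO"] * 4 + ["THANKS"] + ["HELLO"] * 2 + ["THANKS"] * 5
smoothed = list(smooth_stream(raw))
```

The isolated "THANKS" glitch at frame 5 never wins a vote, so the smoothed stream reports a clean transition from one sign to the next, at the cost of a few frames of latency proportional to the window size.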
  4. Background The use of wearables facilitates data collection at a previously unobtainable scale, enabling the construction of complex predictive models with the potential to improve health. However, the highly personal nature of these data requires strong privacy protection against data breaches and the use of data in a way that users do not intend. One method to protect user privacy while taking advantage of sharing data across users is federated learning, a technique that allows a machine learning model to be trained using data from all users while only storing a user’s data on that user’s device. By keeping data on users’ devices, federated learning protects users’ private data from data leaks and breaches on the researcher’s central server and provides users with more control over how and when their data are used. However, there are few rigorous studies on the effectiveness of federated learning in the mobile health (mHealth) domain. Objective We review federated learning and assess whether it can be useful in the mHealth field, especially for addressing common mHealth challenges such as privacy concerns and user heterogeneity. The aims of this study are to describe federated learning in an mHealth context, apply a simulation of federated learning to an mHealth data set, and compare the performance of federated learning with the performance of other predictive models. Methods We applied a simulation of federated learning to predict the affective state of 15 subjects using physiological and motion data collected from a chest-worn device for approximately 36 minutes. We compared the results from this federated model with those from a centralized or server model and with the results from training individual models for each subject.
Results In a 3-class classification problem using physiological and motion data to predict whether the subject was undertaking a neutral, amusing, or stressful task, the federated model achieved 92.8% accuracy on average, the server model achieved 93.2% accuracy on average, and the individual model achieved 90.2% accuracy on average. Conclusions Our findings support the potential for using federated learning in mHealth. The results showed that the federated model performed better than a model trained separately on each individual and nearly as well as the server model. As federated learning offers more privacy than a server model, it may be a valuable option for designing sensitive data collection methods.
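The federated setup described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not the study's pipeline: each hypothetical client fits a one-parameter linear model on its own data, and the server combines client weights by federated averaging (FedAvg-style), so raw sensor readings never leave a client's device.

```python
# Minimal federated averaging sketch: clients train locally, the server
# averages weights proportionally to each client's data size.

def local_update(weights, data, lr=0.1):
    """One local pass of SGD on squared error for the model y = w * x."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: data-size-weighted average of client weights."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two hypothetical clients whose private data both follow y = 2x;
# only model weights (not the (x, y) pairs) are sent to the server.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
]
w_global = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(w_global, d) for d in clients]
    w_global = fed_avg(updates, [len(d) for d in clients])
print(round(w_global, 2))  # converges to 2.0
```

The key privacy property mirrors the paragraph above: the server only ever sees model parameters, while each client's measurements stay on that client's device.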
  5. Pervasive sensing has enabled continuous monitoring of user physiological state through mobile and wearable devices, allowing for large-scale user studies to be conducted, such as those found in mHealth. However, current mHealth studies are limited in their ability to allow users to express their privacy preferences for the data they share across the multiple entities involved in a research study. In this work, we present mPolicy, a privacy policy language for study participants to express the context-aware and data-handling policies needed for mHealth. In addition, we provide a privacy-adaptive policy creation mechanism for byproduct data (such as motion inferences). Lastly, we create a software library called privLib for implementing parsing, enforcement, and policy creation on byproduct data for mPolicy. We evaluate the latency overhead of these operations, and discuss future improvements for scaling to realistic mHealth scenarios.
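The abstract does not show mPolicy's actual syntax, so the following is purely an illustrative sketch in the spirit of such a system: a context-aware, per-recipient data-handling rule modeled as a plain Python mapping, with a small deny-by-default enforcement check standing in for what a library like privLib would perform. All names here are hypothetical.

```python
# Illustrative policy sketch (hypothetical rules, not mPolicy syntax).
# A participant allows heart-rate sharing with researchers only during
# study hours, and forbids sharing motion *inferences* (byproduct data).
policy = {
    ("heart_rate", "researcher"): lambda ctx: 9 <= ctx["hour"] < 17,
    ("motion_inference", "researcher"): lambda ctx: False,
}

def is_allowed(data_type, recipient, context):
    """Enforce the policy; deny by default when no rule matches."""
    rule = policy.get((data_type, recipient))
    return bool(rule and rule(context))

allowed = is_allowed("heart_rate", "researcher", {"hour": 10})           # True
denied = is_allowed("motion_inference", "researcher", {"hour": 10})      # False
off_hours = is_allowed("heart_rate", "researcher", {"hour": 20})         # False
```

Making rules context functions rather than static booleans captures the "context-aware" aspect the abstract emphasizes: the same data type and recipient can be permitted or denied depending on runtime conditions such as time of day.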