

Title: VerHealth: Vetting Medical Voice Applications through Policy Enforcement
Healthcare applications on voice personal assistant systems (e.g., Amazon Alexa) have shown great promise for delivering personalized health services via a conversational interface. However, concerns have also been raised about privacy, safety, and service quality. In this paper, we propose VerHealth, a system that systematically assesses health-related applications on Alexa for how well they comply with existing privacy and safety policies. VerHealth contains a static module and a machine-learning-based dynamic module that can trigger and detect violation behaviors hidden deep in interaction threads. We use VerHealth to analyze 813 health-related applications on Alexa, sending over 855,000 probing questions and analyzing 863,000 responses. We also consult three medical school students (domain experts) to confirm and assess the potential violations. We show that violations are common: for example, 86.36% of the applications omit disclaimers when providing medical information, and 30.23% store users' physical or mental health data without approval. The domain experts found that the applications' medical suggestions were often factually correct but of poor relevance, and that in over half of the cases the applications should have asked more questions before offering suggestions. Finally, we use our results to discuss possible directions for improvement.
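The probe-and-check loop the abstract describes can be loosely pictured as below. This is only an illustrative sketch: VerHealth's actual dynamic module uses machine learning to detect violations, whereas the keyword patterns and function names here (`contains_disclaimer`, `flag_missing_disclaimers`) are hypothetical stand-ins, not the paper's implementation.

```python
import re

# Hypothetical phrase patterns a disclaimer detector might look for.
# The real system learns these signals; this rule list is illustrative only.
DISCLAIMER_PATTERNS = [
    r"not (a substitute for|intended as) (professional )?medical advice",
    r"consult (your|a) (doctor|physician|healthcare provider)",
    r"for informational purposes only",
]

def contains_disclaimer(response: str) -> bool:
    """Return True if a skill's response carries a medical disclaimer."""
    text = response.lower()
    return any(re.search(p, text) for p in DISCLAIMER_PATTERNS)

def flag_missing_disclaimers(responses):
    """Given responses collected by probing a health skill, return the
    ones that provide medical content with no disclaimer attached."""
    return [r for r in responses if not contains_disclaimer(r)]
```

Flagged responses would then go to human (domain-expert) review, as the paper does with medical school students.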
Award ID(s): 2030521, 1920462, 1943100
NSF-PAR ID: 10232909
Author(s) / Creator(s): ; ; ;
Date Published:
Journal Name: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume: 4
Issue: 4
ISSN: 2474-9567
Page Range / eLocation ID: 1 to 21
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Amazon's voice-based assistant, Alexa, enables users to directly interact with various web services through natural language dialogues. It provides developers with the option to create third-party applications (known as Skills) to run on top of Alexa. While such applications ease users' interaction with smart devices and bolster a number of additional services, they also raise security and privacy concerns due to the personal setting they operate in. This paper aims to perform a systematic analysis of the Alexa skill ecosystem. We perform the first large-scale analysis of Alexa skills, obtained from seven different skill stores totaling 90,194 unique skills. Our analysis reveals several limitations in the current skill vetting process. We show that not only can a malicious user publish a skill under any arbitrary developer/company name, but she can also make backend code changes after approval to coax users into revealing unwanted information. Next, we formalize the different skill-squatting techniques and evaluate their efficacy. We find that while certain approaches are more favorable than others, there is no substantial abuse of skill squatting in the real world. Lastly, we study the prevalence of privacy policies across different categories of skills and, more importantly, the policy content of skills that use the Alexa permission model to access sensitive user data. We find that around 23.3% of such skills do not fully disclose the data types associated with the permissions requested. We conclude by providing some suggestions for strengthening the overall ecosystem, thereby enhancing transparency for end-users.
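Skill squatting, as formalized in the abstract above, registers an invocation name that sounds or spells close to a popular skill's name. A minimal sketch of one way to surface such candidates, using plain string similarity as a stand-in for the paper's (phonetic) squatting techniques; the threshold and metric are illustrative assumptions, not the authors' method:

```python
from difflib import SequenceMatcher

def squatting_candidates(target: str, catalog, threshold=0.8):
    """Return invocation names in `catalog` that are close to `target`
    but not identical -- a rough proxy for skill-squatting risk.
    SequenceMatcher.ratio() is a spelling-based stand-in; real squatting
    detection also needs phonetic (sound-alike) comparison."""
    target = target.lower()
    hits = []
    for name in catalog:
        if name.lower() == target:
            continue  # the legitimate skill itself is not a squat
        score = SequenceMatcher(None, target, name.lower()).ratio()
        if score >= threshold:
            hits.append((name, round(score, 2)))
    return sorted(hits, key=lambda t: -t[1])
```

A vetting pipeline could run this against the store catalog whenever a new skill is submitted.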
  2. The rapid adoption of the Internet of Medical Things (IoMT) has revolutionized e-health systems, particularly remote patient monitoring. With the growing adoption of IoMT in delivering technologically advanced health services, the security of Medtronic devices is pivotal, as the security and privacy of data from these devices are directly related to patient safety. The physically unclonable function (PUF) has been the most widely adopted hardware security primitive and has been successfully integrated with various Internet-of-Things (IoT) applications, particularly in smart healthcare, to facilitate device security. To provide security and access control for IoMT devices, this work proposes a novel cybersecurity solution that uses PUFs to enable global access to IoMT devices. The proposed framework enables the PUF-backed devices in a patient's body area network to be securely accessed and controlled globally. The proposed cybersecurity solution has been experimentally validated using a state-of-the-art SRAM PUF, a delay-based PUF, and a trusted platform module (TPM) primitive.
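The enrollment/authentication protocol that typically surrounds a PUF can be sketched as follows. Note the hedge: a real SRAM or delay-based PUF derives its response from physical manufacturing variation and stores no secret; the per-device secret and SHA-256 below merely *simulate* a PUF so the challenge-response protocol around it can be shown. All names here (`Verifier`, `enroll`, `authenticate`) are illustrative, not the paper's API.

```python
import hashlib, hmac, os

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    """Software stand-in for querying a PUF: a real PUF computes this
    from physical variation, with no stored secret."""
    return hashlib.sha256(device_secret + challenge).digest()

class Verifier:
    """Server side: collects challenge-response pairs (CRPs) from the
    device in a trusted enrollment phase, then authenticates the IoMT
    device remotely later without ever holding its 'secret'."""
    def __init__(self):
        self.crp_table = {}

    def enroll(self, device_id, device_secret, n=4):
        for _ in range(n):
            c = os.urandom(16)
            self.crp_table.setdefault(device_id, []).append(
                (c, puf_response(device_secret, c)))

    def authenticate(self, device_id, respond) -> bool:
        """`respond` stands in for sending the challenge to the device's
        PUF over the network. Each CRP is consumed once to resist replay."""
        c, expected = self.crp_table[device_id].pop()
        return hmac.compare_digest(respond(c), expected)
```

Because each challenge-response pair is used only once, an eavesdropper who records one exchange learns nothing useful for the next.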
  3. Within the ongoing disruption of the COVID-19 pandemic, technologically mediated health surveillance programs have vastly intensified and expanded to new spaces. Popular understandings of medical and health data protections came into question as a variety of institutions introduced new tools for symptom tracking, contact tracing, and the management of related data. These systems have raised complex questions about who should have access to health information, under what circumstances, and how people and institutions negotiate relationships between privacy, public safety, and care during times of crisis. In this paper, we take up the case of a large public university working to keep campus productive during COVID-19 through practices of placemaking, symptom screeners, and vaccine mandate compliance databases. Drawing on a multi-methods study including thirty-eight interviews, organizational documents, and discursive analysis, we show where and for whom administrative care infrastructures either misrecognized or torqued (Bowker and Star 1999) the care relationships that made life possible for people in the university community. We argue that an analysis of care—including the social relations that enable it and those that attempt to hegemonically define it—opens important questions for how people relate to data they produce about their bodies as well as to the institutions that manage them. Furthermore, we argue that privacy frameworks that rely on individual rights, essential categories of “sensitive information,” or the normative legitimacy of institutional practices are not equipped to reveal how people negotiate privacy and care in times of crisis. 
  4.
    Security of deep neural network (DNN) inference engines, i.e., trained DNN models deployed on various platforms, has become one of the biggest challenges in deploying artificial intelligence in domains where privacy, safety, and reliability are of paramount importance, such as medical applications. In addition to classic software attacks such as model inversion and evasion attacks, a new attack surface is arising: implementation attacks, which include both passive side-channel attacks and active fault-injection and adversarial attacks, targeting implementation peculiarities of DNNs to breach their confidentiality and integrity. This paper presents several novel passive and active attacks on DNNs that we have developed and tested on medical datasets. Our new attacks reveal a largely under-explored attack surface of DNN inference engines. Insights gained during attack exploration will provide valuable guidance for effectively protecting DNN execution against reverse engineering and integrity violations.
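To make the fault-injection threat above concrete: flipping even a single bit of one stored weight can change a model's prediction. A minimal software model of such a fault on a toy one-neuron classifier; the specific weights and the choice of flipping the sign bit are illustrative assumptions, not an attack from the paper.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of a float64 -- a software model of a single
    hardware fault injected into a stored DNN weight."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (out,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return out

def predict(weights, features) -> int:
    """Toy one-neuron 'model': positive weighted sum -> class 1."""
    score = sum(w * f for w, f in zip(weights, features))
    return int(score > 0)
```

Flipping the sign bit (bit 63 of an IEEE-754 double) of a single weight negates it, which is enough to push the toy score across the decision boundary.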
  5. The quality of data is extremely important for data analytics. Data quality tests typically involve checking constraints specified by domain experts. Existing approaches detect trivial constraint violations and identify outliers without explaining which constraints were violated. Moreover, domain experts may specify constraints in an ad hoc manner and miss important ones. We describe an automated data quality test approach, ADQuaTe2, which uses an autoencoder to (1) discover constraints that may have been missed by experts, (2) label as suspicious those records that violate the constraints, and (3) provide explanations of the violations. An interactive learning technique incorporates expert feedback, which improves accuracy. We evaluate the effectiveness of ADQuaTe2 on real-world datasets from the health and plant domains. We also use datasets from the UCI repository to evaluate the improvement in accuracy after incorporating ground-truth knowledge.
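The flag-and-explain step of an autoencoder-based quality check can be sketched as follows. The `reconstruct` callable stands in for a trained autoencoder's forward pass, and the threshold and per-feature attribution are illustrative assumptions; ADQuaTe2's actual model and explanation mechanism are described in the paper, not here.

```python
def flag_suspicious(records, reconstruct, feature_names, threshold=0.5):
    """Label records whose reconstruction error exceeds `threshold` as
    suspicious, and report the feature contributing most to the error
    as a rough explanation of the violated constraint."""
    report = []
    for rec in records:
        recon = reconstruct(rec)  # trained autoencoder forward pass
        per_feature = [(x - y) ** 2 for x, y in zip(rec, recon)]
        error = sum(per_feature) / len(rec)  # mean squared error
        if error > threshold:
            worst = max(range(len(rec)), key=per_feature.__getitem__)
            report.append({
                "record": rec,
                "error": round(error, 3),
                "suspect_feature": feature_names[worst],
            })
    return report
```

Records the autoencoder reconstructs well are presumed to satisfy the learned constraints; records it reconstructs poorly are surfaced, with the worst-reconstructed feature as a starting point for expert review.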