Mobile health applications and devices (“mobile health apps”) play increasingly important roles in the lives of individuals interested in self-regulating their personal health behaviors. While some appear to be simply consumer products and services, many are embedded in regulatory programs aimed at compliance with expert guidelines. In this paper, we draw on de Vaujany et al.’s framework for organizational IT-based regulation systems to consider how such systems operate in open and distributed contexts in which actors have strong agency and regulation is indirect and voluntary. To do so, we consider how IT artifacts become embedded in practices, how data are implicated in regulatory feedback loops, and how individual, organizational, and technological actors are mobilized and with what regulatory outcomes. We develop an instrumental case study as a vignette of five regulatory episodes involving continuous glucose monitoring systems used by persons with diabetes to examine how expert rules materialized in mobile health apps, data about bodily states, and IT features such as displays and alarms “nudge” individuals towards compliance with self-regulatory guidelines and practices. Through this analysis, we identify two related regulatory affordances of mobile health apps for predicting and surveilling personal health. We theorize how multilevel networks composed of a trifecta of rules, IT artifacts, and practices develop as a regulatory lattice through which social regulation is realized. We conclude by considering the broader implications of this analytical approach to study voluntary, data-enriched regulatory systems.
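For illustration, a minimal sketch of how expert rules and alert features of the kind the abstract describes might be encoded in a CGM companion app. The thresholds follow commonly cited glycemic target ranges, but all names, values, and the 20-minute projection are hypothetical illustrations, not drawn from any vendor’s product or from the systems studied:

```python
# Sketch of expert glycemic guidelines "materialized" as alert rules in a
# hypothetical CGM companion app. Thresholds reflect a commonly cited
# target range (70-180 mg/dL); everything here is illustrative only.

from dataclasses import dataclass

@dataclass
class GlucoseReading:
    mg_dl: float            # sensor glucose value
    trend_mg_dl_min: float  # rate of change reported by the sensor

LOW_ALERT = 70    # guideline lower bound of target range
HIGH_ALERT = 180  # guideline upper bound of target range

def evaluate(reading: GlucoseReading) -> str | None:
    """Return an alert message when a reading falls outside the guideline
    range -- the surveillance and prediction "nudges" in app form."""
    if reading.mg_dl < LOW_ALERT:
        return "LOW glucose: treat per your care plan"
    if reading.mg_dl > HIGH_ALERT:
        return "HIGH glucose: check and correct per your care plan"
    # Predictive nudge: in range now, but projected to cross the low
    # threshold within 20 minutes at the current rate of change.
    if reading.mg_dl + 20 * reading.trend_mg_dl_min < LOW_ALERT:
        return "Falling fast: predicted low within 20 minutes"
    return None  # in range: no nudge issued

# Example: 85 mg/dL and falling 1.2 mg/dL per minute triggers the
# predictive alert even though the current value is in range.
print(evaluate(GlucoseReading(mg_dl=85, trend_mg_dl_min=-1.2)))
```

The predictive branch is what distinguishes the two affordances the paper identifies: the first two rules surveil the current bodily state, while the projection anticipates a future one.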
- PAR ID: 10402847
- Publisher / Repository: SAGE Publications
- Journal Name: Journal of Information Technology
- Volume: 38
- Issue: 2
- ISSN: 0268-3962
- Size(s): p. 108-125
- Sponsoring Org: National Science Foundation
More Like this
-
Patient-generated health data (PGHD), created and captured from patients via wearable devices and mobile apps, are proliferating outside of clinical settings. Examples include sleep trackers, fitness trackers, continuous glucose monitors, and RFID-enabled implants, with many additional biometric or health surveillance applications in development or envisioned. These data are included in growing stockpiles of personal health data being mined for insight via big data analytics and artificial intelligence/deep learning technologies. Governing these data resources to facilitate patient care and health research while preserving individual privacy and autonomy will be challenging, as PGHD are the least regulated domain of digitalized personal health data (U.S. Department of Health and Human Services, 2018). When patients themselves collect digitalized PGHD using “apps” provided by technology firms, these data fall outside of conventional health data regulation, such as HIPAA. Instead, PGHD are maintained primarily on the information technology infrastructure of vendors, and data are governed under the IT firm’s own privacy policies and within the firm’s intellectual property rights. Dominant narratives position these highly personal data as valuable resources to transform healthcare, stimulate innovation in medical research, and engage individuals in their health and healthcare. However, ensuring privacy, security, and equity of benefits from PGHD will be challenging. PGHD can be aggregated and, despite putative “deidentification,” be linked with other health, economic, and social data for predictive analytics. As large tech companies enter the healthcare sector (e.g., Google Health is partnering with Ascension Health to analyze the PHI of millions of people across 21 U.S. states), the lack of harmonization between regulatory regimes may render existing safeguards to preserve patient privacy and control over their PHI ineffective. While healthcare providers are bound to adhere to health privacy laws, Big Tech comes under more relaxed regulatory regimes that will facilitate monetizing PGHD. We explore three existing data protection regimes relevant to PGHD in the United States that are currently in tension with one another: federal and state health-sector laws, data use and reuse for research and innovation, and industry self-regulation by large tech companies. We then identify three types of structures (organizational, regulatory, technological/algorithmic), which could synergistically help enact needed regulatory oversight while limiting the friction and economic costs of regulation. This analysis provides a starting point for further discussions and negotiations among stakeholders and regulators.
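As a concrete illustration of the linkage risk this abstract describes, a minimal sketch in which two “deidentified” datasets are joined on quasi-identifiers; all data and column names are synthetic inventions, not real records or any firm’s actual schema:

```python
# Sketch of re-identification via quasi-identifiers: PGHD stripped of
# names is joined to an outside dataset on ZIP code, birth year, and sex.
# All data and column names here are synthetic illustrations.

import pandas as pd

# PGHD exported from a hypothetical wellness app ("deidentified": no names)
pghd = pd.DataFrame({
    "zip": ["43604", "43604", "90210"],
    "birth_year": [1968, 1985, 1972],
    "sex": ["F", "M", "F"],
    "avg_glucose_mg_dl": [154, 101, 172],
})

# Outside dataset that does carry identities, e.g. a marketing or voter file
outside = pd.DataFrame({
    "zip": ["43604", "90210"],
    "birth_year": [1968, 1972],
    "sex": ["F", "F"],
    "name": ["A. Smith", "B. Jones"],
})

# A plain join on the quasi-identifiers links health measurements to names
linked = pghd.merge(outside, on=["zip", "birth_year", "sex"])
print(linked[["name", "avg_glucose_mg_dl"]])
```

No sophisticated analytics are required: an ordinary database join suffices once the quasi-identifiers overlap, which is why putative “deidentification” offers weak protection against aggregation.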
-
Background: The health belief model suggests that individuals' beliefs affect behaviors associated with health. This study examined whether Ohioans' pre-existing medical diagnoses affected their beliefs about personal health risk and their compliance with social distancing during the coronavirus disease 2019 (COVID-19) pandemic. Prior research examining physical and mental health diagnoses and social distancing compliance is nearly nonexistent. We examined whether physical and mental health diagnoses influenced individuals' beliefs that their health was at risk and their adherence to social distancing guidelines. Methods: The study used longitudinal cohort data from the Toledo Adolescent Relationships Study (TARS) (n = 790), which surveyed Ohioans prior to and during the COVID-19 pandemic. Dependent variables included the belief that one's own health was at risk and social distancing compliance. Independent variables included physical and mental health diagnoses, pandemic-related factors (fear of COVID-19, political beliefs about the pandemic, friends' social distancing, family's social distancing, COVID-19 exposure), and sociodemographic variables (age, gender, race/ethnicity, educational level). Results: Individuals who had a pre-existing physical health diagnosis were more likely to believe that their personal health was at risk during the pandemic but were not more likely to comply with social distancing guidelines. In contrast, individuals who had a pre-existing mental health diagnosis were more compliant with social distancing guidelines but were not more likely to believe their personal health was at risk. Individuals who expressed greater fear of COVID-19 believed their health was more at risk than those who expressed lower levels of fear. Conclusion: Health considerations are important to account for in assessments of responses to the pandemic, beliefs about personal health risk, and social distancing behavior. Additional research is needed to understand the divergence in the findings regarding physical health, beliefs about personal health risk, and social distancing compliance. Further research is needed to understand how mental health issues affect decision-making related to social distancing compliance.
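For readers who want the design made concrete, a minimal sketch of the kind of model this design implies, a logistic regression of compliance on diagnoses and controls; the data and variable names are synthetic and do not reproduce the TARS analysis or its covariate set:

```python
# Sketch of the analytic design described above: logistic regression of
# social-distancing compliance on pre-existing diagnoses plus controls.
# Data and variable names are synthetic; this is not the TARS analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 790  # matches the reported sample size
df = pd.DataFrame({
    "complies": rng.integers(0, 2, n),      # social distancing compliance
    "physical_dx": rng.integers(0, 2, n),   # pre-existing physical diagnosis
    "mental_dx": rng.integers(0, 2, n),     # pre-existing mental diagnosis
    "fear_covid": rng.normal(0, 1, n),      # standardized fear-of-COVID scale
    "age": rng.integers(25, 40, n),
})

model = smf.logit("complies ~ physical_dx + mental_dx + fear_covid + age",
                  data=df).fit(disp=False)
print(model.summary())
```

The same model with the risk-belief outcome as the dependent variable would give the second analysis the abstract describes; comparing the two coefficient tables is what reveals the divergence between physical and mental diagnoses.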
-
Recently, there has been a proliferation of personal health applications claiming to use Artificial Intelligence (AI) to assist health consumers in making health decisions based on their data and algorithmic outputs. However, it is still unclear how such descriptions influence individuals' perceptions of these apps and their recommendations. We therefore investigate how current AI descriptions influence individuals' attitudes towards algorithmic recommendations in fertility self-tracking through a simulated study using three versions of a fertility app. We found that participants preferred AI descriptions with explanations, which they perceived as more accurate and trustworthy. Nevertheless, they were unwilling to rely on these apps for high-stakes goals because of the potential consequences of failure. We then discuss the importance of health goals for AI acceptance, how literacy and assumptions influence perceptions of AI descriptions and explanations, and the limitations of transparency in the context of algorithmic decision-making for personal health.
-
This article examines how ignorance can be produced by regulatory systems. Using the case of contamination from per- and polyfluoroalkyl substances (PFAS), we identify patterns of institutionalized ignorance in U.S. chemical regulation. Drawing on in-depth interviews and archival research, we develop a chemical regulatory pathway approach to study knowledge and ignorance production through the regulatory framework, the Toxic Substances Control Act (TSCA). Investigating TSCA's operation, we consider why PFAS were relatively recently recognized as a significant public health threat, despite evidence of their risks in the 1960s. The historical context of TSCA's enactment, including the mobilization of the chemical industry, contributed to the institutionalization of organizational practices promoting distinct types of ignorance based on stakeholder position: chemical manufacturers, who have discretion over knowledge production and dissemination; regulators, who operate under selective ignorance; and communities and consumers, who experience nescience, or total surprise.