Increased use of technology in schools raises new privacy and security challenges for K-12 students, including harms such as commercialization of student data, exposure of student data in security breaches, and expanded tracking of students, yet the extent of these challenges is unclear. In this paper, we first interviewed 18 school officials and IT personnel to understand which educational technologies districts use and how they manage student privacy and security around those technologies. Second, to determine whether these educational technologies are frequently endorsed across United States (US) public schools, we compiled a list of linked educational technology websites scraped from 15,573 K-12 public school and district domains and analyzed them for privacy risks. Our findings suggest that administrators lack the resources to properly assess the privacy and security issues surrounding educational technologies, even though these technologies do pose privacy risks. Based on these findings, we make recommendations for policymakers, educators, and the CHI research community.
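As a rough illustration of the crawling step, the sketch below (not the paper's actual pipeline) collects the third-party domains linked from a single school district homepage, the kind of raw signal such a crawl could aggregate across thousands of districts. The district URL is hypothetical.

```python
# Minimal sketch, assuming a district homepage whose outbound links hint at
# endorsed educational technologies. Not the paper's pipeline.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def external_domains(school_url: str) -> set[str]:
    """Return the set of third-party domains linked from a school homepage."""
    html = requests.get(school_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    school_host = urlparse(school_url).netloc
    domains = set()
    for a in soup.find_all("a", href=True):
        host = urlparse(urljoin(school_url, a["href"])).netloc
        if host and host != school_host:
            domains.add(host)
    return domains

# Hypothetical district domain; the paper crawled 15,573 real ones.
print(sorted(external_domains("https://www.example-district.k12.us")))
```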
Virtual Classrooms and Real Harms: Remote Learning at U.S. Universities
Universities have been forced to rely on remote educational technology to facilitate the rapid shift to online learning. In doing so, they acquire new risks of security vulnerabilities and privacy violations. To help universities navigate this landscape, we develop a model that describes the actors, incentives, and risks, informed by surveying 105 educators and 10 administrators. Next, we develop a methodology for administrators to assess security and privacy risks of these products. We then conduct a privacy and security analysis of 23 popular platforms using a combination of sociological analyses of privacy policies and 129 state laws, alongside a technical assessment of platform software. Based on our findings, we develop recommendations for universities to mitigate the risks to their stakeholders.
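As one illustration of the policy side of such an assessment, the sketch below runs a keyword screen over a platform's privacy policy text, the kind of first-pass triage an administrator might perform before a full legal review. The keyword patterns are illustrative assumptions, not the paper's coding scheme.

```python
# Minimal sketch of a first-pass privacy-policy screen; the flag list is an
# assumption for illustration, not drawn from the paper's methodology.
import re

POLICY_FLAGS = {
    "data_sale":   r"\bsell\w*\b.{0,40}\b(data|information)\b",
    "third_party": r"\bthird[- ]part(?:y|ies)\b",
    "ferpa":       r"\bFERPA\b",
    "retention":   r"\bretain\w*\b|\bretention\b",
}

def screen_policy(policy_text: str) -> dict[str, bool]:
    """Flag which privacy-relevant topics a policy text mentions at all."""
    return {
        name: bool(re.search(pattern, policy_text, re.IGNORECASE | re.DOTALL))
        for name, pattern in POLICY_FLAGS.items()
    }

# Usage (hypothetical file): screen_policy(open("platform_policy.txt").read())
```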
- Award ID(s): 1801316
- PAR ID: 10289268
- Date Published:
- Journal Name: Symposium on Usable Privacy and Security
- ISSN: 1949-6532
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Consumer Internet of Things (IoT) devices, from smart speakers to security cameras, are increasingly common in homes. Along with their benefits come potential privacy and security threats. To limit these threats, a number of commercial services (IoT safeguards) have become available. These safeguards claim to protect against IoT privacy risks and security threats, but their effectiveness and their own associated privacy risks remain key open questions. In this paper, we investigate the threat detection capabilities of IoT safeguards for the first time. We develop and release an approach for automated safeguard experimentation to reveal their response to common security threats and privacy risks. We perform thousands of automated experiments using popular commercial IoT safeguards deployed in a large IoT testbed. Our results indicate not only that these devices may be ineffective in preventing risks, but also that their own cloud interactions and data collection may introduce privacy risks for the households that adopt them.
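A minimal sketch of what one such automated experiment could look like (this is not the authors' released tool): emulate a noisy TCP port scan against a testbed device, a threat signature that scan-detection heuristics should flag, then check whether the safeguard under test raised an alert. The target IP and the safeguard alert client at the end are hypothetical; real products expose alerts differently (companion app, cloud API, or not at all).

```python
# Minimal sketch of one safeguard experiment, under stated assumptions.
import socket

def port_scan(target_ip: str, ports=range(1, 1025)) -> list[int]:
    """Noisy sequential connect() scan; should trip scan-detection heuristics."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((target_ip, port)) == 0:
                open_ports.append(port)
    return open_ports

# Experiment: scan a testbed device, then record whether the safeguard
# reported the scan within a fixed window.
open_ports = port_scan("192.168.1.50")  # hypothetical testbed device
# detected = safeguard_client.alerts(since=start_time)  # hypothetical API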
Trustworthy data repositories ensure the security of their collections. We argue that they should also ensure the security of researcher and human-subject data. Here we demonstrate the use of a privacy impact assessment (PIA) to evaluate potential privacy risks to researchers, using ICPSR's Open Badges Research Credential System as a case study. We present our workflow and discuss potential privacy risks and mitigations for those risks. [This paper is a conference pre-print presented at IDCC 2020 after lightweight peer review.]
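For illustration, a PIA typically records each identified risk with a likelihood-times-impact score and a planned mitigation. The sketch below shows such a risk-register entry; it is a generic template assumed here, not ICPSR's actual PIA instrument, and the example entry is invented.

```python
# Minimal sketch of a generic PIA risk-register entry (assumed template,
# not ICPSR's instrument). Scores use a common 1-5 likelihood/impact scale.
from dataclasses import dataclass

@dataclass
class PrivacyRisk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        """Likelihood-times-impact risk score on a 1-25 scale."""
        return self.likelihood * self.impact

risk = PrivacyRisk(
    description="Badge metadata links a researcher's identity to restricted-data access",
    likelihood=2,
    impact=4,
    mitigation="Minimize retained metadata; restrict who can query badge records",
)
print(risk.score)  # 8
```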
Children are exposed to technology at home and school at very young ages, often using family mobile devices and educational apps. It is therefore critical that they begin learning about privacy and security concepts during their elementary school years, rather than waiting until they are older. Such skills will help children navigate an increasingly connected world and develop agency over their personal data, online interactions, and online security. In this paper, we explore how a simple technique, a "Would You Rather" (WYR) game involving hypothetical privacy and security scenarios, can support children in working through the nuances of these types of situations, and how educators can leverage this approach to support children's privacy and security learning. We conducted three focus groups with 21 children aged 7-12 using the WYR activity and interviewed 13 elementary school teachers about the use of WYR for facilitating privacy and security learning. We found that WYR provided a meaningful opportunity for children to assess privacy and security risks, consider some of the social and emotional aspects of privacy and security dilemmas, and assert their agency in a manner typically unavailable to children in an adult-centric society. Teachers highlighted connections between privacy and security dilemmas and children's social and emotional learning, and offered additional insights about using the WYR technique in and beyond their classrooms. Based on these findings, we highlight four opportunities for using WYR to support children in engaging with privacy and security concepts from an early age.
Explainability is increasingly recognized as an enabling technology for the broader adoption of machine learning (ML), particularly for safety-critical applications. This has given rise to explainable ML, which seeks to enhance the explainability of neural networks through the use of explanators. Yet the pursuit of better explainability inadvertently leads to increased security and privacy risks. While there has been considerable research into the security risks of explainable ML, its potential privacy risks remain under-explored. To bridge this gap, we present a systematic study of privacy risks in explainable ML through the lens of membership inference. Building on the observation that, besides model accuracy, robustness also exhibits observable differences between member and non-member samples, we develop a new membership inference attack. The attack extracts additional membership features from changes in model confidence under different levels of perturbation, guided by the feature importance highlighted by the explanators' attribution maps. Intuitively, perturbing important features generally results in a bigger loss in confidence for member samples. Using the member/non-member differences in both model performance and robustness, an attack model is trained to distinguish membership. We evaluated our approach with seven popular explanators across various benchmark models and datasets. Our attack demonstrates that there is non-trivial privacy leakage in current explainable ML methods. Furthermore, this leakage persists even if the attacker lacks knowledge of the training datasets or target model architectures. Lastly, we found that existing model- and output-based defense mechanisms are not effective in mitigating this new attack.
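To make the attack's core step concrete, the sketch below computes the confidence drop under increasing attribution-guided perturbation, the extra membership features the attack combines with accuracy-based ones. It assumes `model` returns a scalar class confidence and `attribution` returns per-feature importance from the explanator; both stand-ins are toys for illustration, not the paper's implementation.

```python
# Minimal numpy sketch of attribution-guided membership features.
# `model` and `attribution` are toy stand-ins, not the paper's code.
import numpy as np

def confidence_drop_features(x, model, attribution,
                             noise_levels=(0.05, 0.1, 0.2)):
    """Confidence loss when the MOST important features (per the explanator)
    are perturbed at increasing noise levels; member samples tend to lose
    more confidence under such targeted perturbation."""
    base_conf = model(x)
    importance = attribution(x)
    top = importance >= np.quantile(importance, 0.9)  # top-10% features
    rng = np.random.default_rng(0)
    feats = []
    for sigma in noise_levels:
        x_pert = x.copy()
        x_pert[top] += rng.normal(0, sigma, size=top.sum())
        feats.append(base_conf - model(x_pert))
    # These features, together with performance-based ones, would train
    # the attack model that distinguishes members from non-members.
    return np.array(feats)

# Toy stand-ins for illustration only:
model = lambda x: float(1 / (1 + np.exp(-x.sum())))  # "confidence"
attribution = lambda x: np.abs(x)                    # "importance map"
x = np.random.default_rng(1).normal(size=20)
print(confidence_drop_features(x, model, attribution))
```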