- Award ID(s):
- Publication Date:
- NSF-PAR ID:
- Journal Name:
- The Yale Law Journal
- Page Range or eLocation-ID:
- Sponsoring Org:
- National Science Foundation
More Like this
Intent-aware Permission Architecture: A Model for Rethinking Informed Consent for Android Apps
As data privacy continues to be a crucial human-rights concern, as recognized by the UN, regulatory agencies have demanded that developers obtain user permission before accessing user-sensitive data. Developers fulfill their legal obligation to keep users abreast of requests for their data mainly through privacy policy statements. In addition, platforms such as Android enforce explicit permission requests through the permission model. Nonetheless, recent research has shown that service providers rarely make full disclosure when requesting data in these statements, nor is the current permission model designed to provide adequate informed consent: users often have no clear understanding of the reason for, or the scope of use of, a data request. This paper proposes an unambiguous informed-consent process that provides developers with a standardized method for declaring intent. Our proposed Intent-aware permission architecture extends the current Android permission model with a precise mechanism for full disclosure of purpose and scope limitation, whose design is based on an ontology study of data-request purposes. The overarching objective of this model is to ensure that end users are adequately informed before making decisions about their data. Additionally, this model has the potential to improve trust between end users and developers.
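The paper's architecture targets Android's permission model itself, but its core idea — attaching a declared purpose and a scope limitation to each permission request so the user sees both before consenting — can be illustrated with a minimal sketch. All class, field, and string names below are hypothetical illustrations of the abstract's description, not the paper's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentAwarePermissionRequest:
    """A data request that carries an explicit purpose and scope,
    rather than just a bare permission name."""
    permission: str   # standard Android permission string
    purpose: str      # declared reason, drawn from a purpose ontology
    scope: str        # limitation on when/how the data may be used

def consent_prompt(req: IntentAwarePermissionRequest) -> str:
    """Render a disclosure so the user sees purpose and scope
    before granting or denying the request."""
    return (f"This app requests {req.permission} "
            f"for the purpose of '{req.purpose}', "
            f"limited to '{req.scope}'.")

# Example request: location access with declared purpose and scope.
req = IntentAwarePermissionRequest(
    permission="android.permission.ACCESS_FINE_LOCATION",
    purpose="show nearby stores",
    scope="only while the app is in the foreground",
)
print(consent_prompt(req))
```

The contrast with the current model is that a plain `ACCESS_FINE_LOCATION` dialog discloses neither field; here, the purpose and scope are structural parts of the request, so the platform (not the developer's goodwill) guarantees they reach the user.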
Consent is central to many of today’s most pressing social issues: What counts as sexual assault? Whom are the police allowed to search? Can they use people’s data like that? Yet despite the fact that consent is in many ways an inherently psychological phenomenon, it has not been a core topic of study in psychology. Although domain-specific research on consent—most commonly, informed consent and sexual consent—is regularly published in specialty journals (e.g., methods and sex-research journals), consent has been largely ignored as a generalizable psychological phenomenon. This has meant that consent has been mostly excluded from “mainstream” psychology as a core topic of study. This omission is particularly striking given that psychologists have paid broad attention to related constructs, such as compliance, obedience, persuasion, free will, and autonomy, and that scholars in other fields, such as law and philosophy, have paid considerably more attention to the topic of consent, despite its uniquely psychological qualities. In this article, I argue that psychologists should embrace consent—in particular, the subjective experience of consent—as a core topic of study.
What Happens When Robots Punish? Evaluating Human Task Performance During Robot-Initiated Punishment
This article examines how people respond to robot-administered verbal and physical punishments. Human participants were tasked with sorting colored chips under time pressure and were punished by a robot when they made mistakes, such as sorting inaccurately or too slowly. Participants were punished either verbally, by being told to stop sorting for a fixed time, or physically, by having their ability to sort restrained with a custom-built robotic exoskeleton. Punishments were administered by either a human experimenter or the robot exoskeleton, and participants' task performance and subjective perceptions of their interaction with the robot were recorded. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also tended to comply with robot-administered punishments at a lower rate than with human-administered punishments, which suggests that humans may not afford a robot the social authority to administer punishments. This study also contributes to our understanding of compliance with a robot and of whether people accept a robot's authority to punish. The results may inform the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment.
Browser users encounter a broad array of potentially intrusive practices: from behavioral profiling to crypto-mining, fingerprinting, and more. We study people's perception, awareness, and understanding of these practices, as well as their preferences for opting out of them. We conducted a mixed-methods study that included qualitative (n=186) and quantitative (n=888) surveys covering 8 neutrally presented practices, equally highlighting both their benefits and risks. Consistent with prior research focusing on specific practices and mitigation techniques, we observe that most people are unaware of how to effectively identify or control the practices we surveyed. However, our user-centered approach reveals diverse views about the perceived risks and benefits, and that the majority of our participants wished both to restrict the surveyed practices and to be explicitly notified about them. Though prior research shows that meaningful controls are rarely available, we found that many participants mistakenly assume opt-out settings are common but simply too difficult to find. However, even if such settings were hypothetically available on every website, our findings suggest that settings which allow practices by default are more burdensome to users than alternatives contextualized to website categories. Our results argue for settings that can distinguish among website categories where certain practices are seen as permissible, and that proactively notify users […]
New consent management platforms (CMPs) have been introduced to the web to conform with the EU's General Data Protection Regulation, particularly its requirements for consent when companies collect and process users' personal data. This work analyses how the most prevalent CMP designs affect people's consent choices. We scraped the designs of the five most popular CMPs on the top 10,000 websites in the UK (n=680). We found that dark patterns and implied consent are ubiquitous; only 11.8% meet the minimal requirements that we set based on European law. Second, we conducted a field experiment with 40 participants to investigate how the eight most common designs affect consent choices. We found that notification style (banner or barrier) has no effect; removing the opt-out button from the first page increases consent by 22--23 percentage points; and providing more granular controls on the first page decreases consent by 8--20 percentage points. This study provides an empirical basis for the necessary regulatory action to enforce the GDPR, in particular the possibility of focusing on the centralised, third-party CMP services as an effective way to increase compliance.