Title: Two AI Truths and a Lie
Industry will take everything it can in developing artificial intelligence (AI) systems. We will get used to it. This will be done for our benefit. Two of these things are true and one of them is a lie. It is critical that lawmakers identify them correctly. In this Essay, I argue that no matter how AI systems develop, if lawmakers do not address the dynamics of dangerous extraction, harmful normalization, and adversarial self-dealing, then AI systems will likely be used to do more harm than good. Given these inevitabilities, lawmakers will need to change their usual approach to regulating technology. Procedural approaches requiring transparency and consent will not be enough. Merely regulating use of data ignores how information collection and the affordances of tools bestow and exercise power. A better approach involves duties, design rules, defaults, and data dead ends. This layered approach will more squarely address dangerous extraction, harmful normalization, and adversarial self-dealing to better ensure that AI deployments advance the public good.
Award ID(s): 1956393
PAR ID: 10638373
Author(s) / Creator(s):
Publisher / Repository: Yale Journal of Law & Technology
Date Published:
Journal Name: Yale Journal of Law & Technology
Volume: 26
Issue: 3
ISSN: 2766-2403
Page Range / eLocation ID: 595
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Lawmakers have started to regulate “dark patterns,” understood to be design practices meant to influence technology users’ decisions through manipulative or deceptive means. Most agree that dark patterns are undesirable, but open questions remain as to which design choices should be subjected to scrutiny, much less the best way to regulate them. In this Article, we propose adapting the concept of dark patterns to better fit legal frameworks. Critics allege that the legal conceptualizations of dark patterns are overbroad, impractical, and counterproductive. We argue that law and policy conceptualizations of dark patterns suffer from three deficiencies: First, dark patterns lack a clear value anchor for cases to build upon. Second, legal definitions of dark patterns overfocus on individuals and atomistic choices, ignoring de minimis aggregate harms and the societal implications of manipulation at scale. Finally, the law has struggled to articulate workable legal thresholds for wrongful dark patterns. To better regulate the designs called dark patterns, lawmakers need a better conceptual framing that bridges the gap between design theory and the law’s need for clarity, flexibility, and compatibility with existing frameworks. We argue that wrongful self-dealing is at the heart of what most consider to be “dark” about certain design patterns. Taking advantage of design affordances to the detriment of a vulnerable party is disloyal. To that end, we propose disloyal design as a regulatory framing for dark patterns. In drawing from established frameworks that prohibit wrongful self-dealing, we hope to provide more clarity and consistency for regulators, industry, and users. Disloyal design will fit better into legal frameworks and better rally public support for ensuring that the most popular tools in society are built to prioritize human values. 
  2. Ensuring that AI systems do what we, as humans, actually want them to do is one of the biggest open research challenges in AI alignment and safety. My research seeks to directly address this challenge by enabling AI systems to interact with humans to learn aligned and robust behaviors. The way in which robots and other AI systems behave is often the result of optimizing a reward function. However, manually designing good reward functions is highly challenging and error-prone, even for domain experts. Consider trying to write down a reward function that describes good driving behavior or how you like your bed made in the morning. While reward functions for these tasks are difficult to specify manually, human feedback in the form of demonstrations or preferences is often much easier to obtain. However, human data is often difficult to interpret due to ambiguity and noise. Thus, it is critical that AI systems account for epistemic uncertainty over the human's true intent. My talk will give an overview of my lab's progress along the following fundamental research areas: (1) efficiently maintaining uncertainty over human intent, (2) directly optimizing behavior to be robust to uncertainty over human intent, and (3) actively querying for additional human input to reduce uncertainty over human intent.
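For context on the preference-based reward learning described above, here is a minimal illustrative sketch, not the speaker's actual method: it assumes a linear reward over hand-chosen trajectory features, a Bradley-Terry likelihood over pairwise preferences, and a crude Metropolis-Hastings sampler to represent epistemic uncertainty over the human's true intent. All names and modeling choices are assumptions for illustration.

```python
# Sketch: Bayesian reward inference from pairwise preferences (assumed setup).
import numpy as np

def preference_log_likelihood(w, preferred_feats, rejected_feats, beta=1.0):
    """Log-likelihood of observed preferences under a Bradley-Terry model,
    with reward r(traj) = w . features(traj). Arrays are [num_pairs, dim]."""
    r_pref = preferred_feats @ w          # returns of preferred trajectories
    r_rej = rejected_feats @ w            # returns of rejected trajectories
    # log sigmoid(beta * (r_pref - r_rej)), summed over all preference pairs
    return np.sum(-np.log1p(np.exp(-beta * (r_pref - r_rej))))

def posterior_samples(preferred_feats, rejected_feats, n_samples=5000, step=0.05):
    """Crude Metropolis-Hastings over w: the spread of the returned samples
    reflects how uncertain we remain about the human's true intent."""
    rng = np.random.default_rng(0)
    dim = preferred_feats.shape[1]
    w = np.zeros(dim)
    ll = preference_log_likelihood(w, preferred_feats, rejected_feats)
    samples = []
    for _ in range(n_samples):
        w_new = w + step * rng.standard_normal(dim)
        w_new /= np.linalg.norm(w_new) + 1e-8   # keep reward weights on the unit sphere
        ll_new = preference_log_likelihood(w_new, preferred_feats, rejected_feats)
        if np.log(rng.uniform()) < ll_new - ll:  # accept/reject proposal
            w, ll = w_new, ll_new
        samples.append(w.copy())
    return np.array(samples)
```

The dispersion of the posterior samples is one simple way to quantify remaining uncertainty over the human's reward, which areas (2) and (3) above would then optimize against or reduce through further queries.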
  3. Increasingly, laws are being proposed and passed by governments around the world to regulate artificial intelligence (AI) systems implemented into the public and private sectors. Many of these regulations address the transparency of AI systems, and related citizen-aware issues like allowing individuals to have the right to an explanation about how an AI system makes a decision that impacts them. Yet, almost all AI governance documents to date have a significant drawback: they have focused on what to do (or what not to do) with respect to making AI systems transparent, but have left the bulk of the work to technologists to figure out how to build transparent systems. We fill this gap by proposing a stakeholder-first approach that assists technologists in designing transparent, regulatory-compliant systems. We also describe a real-world case study that illustrates how this approach can be used in practice.
  4. The BoF will address four questions:
Q1: What are novel modes of computing implied by AI? Which will have the greatest impact on CI for gateways and national compute resources? (Facilitated by Joe Stubbs)
Q2: What are or should be the goals of broadening access to compute resources for AI purposes? Who can be brought into the community that previously was not? (Facilitated by Sandra Gesing)
Q3: What are the interesting aspects related to data sharing? How do the various compute loads and modes impact how data are shared, moved, and accessed? (Facilitated by Rob Quick)
Q4: How can AI be used in a science gateway to make it more effective, efficient, secure, or otherwise better? (Facilitated by Claire Stirm)
  5. Deepfake videos are improving in quality and can be used for dangerous disinformation campaigns. The pressing need to detect these videos has motivated researchers to develop different types of detection models. Among them, the models that utilize temporal information (i.e., sequence-based models) are more effective at detection than the ones that only detect intra-frame discrepancies. Recent work has shown that the latter detection models can be fooled with adversarial examples, leveraging the rich literature on crafting adversarial (still) images. It is less clear, however, how well these attacks will work on sequence-based models that operate on information taken over multiple frames. In this paper, we explore the effectiveness of the Fast Gradient Sign Method (FGSM) and the Carlini-Wagner L2-norm attack to fool sequence-based deepfake detector models in both the white-box and black-box settings. The experimental results show that the attacks are effective with maximum success rates of 99.72% and 67.14% in the white-box and black-box attack scenarios, respectively. This highlights the importance of developing more robust sequence-based deepfake detectors and opens up directions for future research.
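To make the attack setup concrete, below is a minimal sketch of a single FGSM step against a hypothetical sequence-based detector. The model interface, the [T, C, H, W] clip shape, and the epsilon value are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: one FGSM step on a video clip fed to an assumed sequence-based detector.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, clip: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Perturb every frame of `clip` (shape [T, C, H, W], pixels in [0, 1])
    in the direction that increases the detector's loss, bounded by
    `epsilon` per pixel."""
    clip = clip.clone().detach().requires_grad_(True)
    logits = model(clip.unsqueeze(0))                      # assumed input: [1, T, C, H, W]
    loss = nn.functional.cross_entropy(logits, label.unsqueeze(0))
    loss.backward()
    # Single signed-gradient step: the core of the Fast Gradient Sign Method.
    adv_clip = clip + epsilon * clip.grad.sign()
    return adv_clip.clamp(0.0, 1.0).detach()               # keep pixels in a valid range
```

By contrast, a Carlini-Wagner L2 attack would solve an optimization problem for a minimally perturbed clip rather than taking one signed-gradient step, which generally yields smaller but more computationally expensive perturbations.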