Autonomous vehicles (AVs) hold great potential to increase road safety, reduce traffic congestion, and improve mobility systems. However, the deployment of AVs introduces new liability challenges when they are involved in car accidents, and a new legal framework is needed to address them. This paper proposes such a framework, applying liability rules to rear-end crashes in mixed-traffic platoons of AVs and human-driven vehicles (HVs). We leverage a matrix game approach to understand interactions among players whose utilities capture crash losses for drivers under the liability rules. We investigate how liability rules may impact the game equilibrium between vehicles and whether moral hazard arises for human drivers if liability is not designed properly. We find that, compared to the no-fault liability rule, contributory and comparative rules give road users an incentive to adopt shorter reaction times, improving road safety. Moral hazard does arise for human drivers when risk-averse AV players are present in the platoon.
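To make the matrix-game framing concrete, here is a minimal sketch of how liability rules can reshape the equilibrium between an AV and an HV choosing reaction times. The two-action simplification, the fault test (a long reaction time counts as negligence), and all numbers are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of a two-player liability game between an AV and an HV.
# All numbers (crash probabilities, loss, care costs) are illustrative
# assumptions, not values from the paper.

import itertools

ACTIONS = ["short_reaction", "long_reaction"]  # shorter reaction = more care

# Probability of a rear-end crash for each joint action profile.
CRASH_PROB = {
    ("short_reaction", "short_reaction"): 0.01,
    ("short_reaction", "long_reaction"):  0.05,
    ("long_reaction", "short_reaction"):  0.05,
    ("long_reaction", "long_reaction"):   0.10,
}
LOSS = 100.0                                         # total crash loss to apportion
CARE_COST = {"short_reaction": 3.0, "long_reaction": 0.0}

def liability_share(rule, a_av, a_hv):
    """Fraction of the expected loss borne by the AV under a given rule."""
    if rule == "no_fault":
        return 0.5                                   # each side bears half regardless of fault
    negligent = {p for p, a in [("av", a_av), ("hv", a_hv)] if a == "long_reaction"}
    if rule == "contributory":
        if negligent == {"av"}: return 1.0           # sole negligent party bears the loss
        if negligent == {"hv"}: return 0.0
        return 0.5                                   # neither or both negligent: split
    if rule == "comparative":
        if not negligent: return 0.5
        return len(negligent & {"av"}) / len(negligent)  # split in proportion to fault
    raise ValueError(rule)

def payoffs(rule, a_av, a_hv):
    expected_loss = CRASH_PROB[(a_av, a_hv)] * LOSS
    s = liability_share(rule, a_av, a_hv)
    u_av = -(CARE_COST[a_av] + s * expected_loss)
    u_hv = -(CARE_COST[a_hv] + (1 - s) * expected_loss)
    return u_av, u_hv

def pure_nash(rule):
    eq = []
    for a_av, a_hv in itertools.product(ACTIONS, ACTIONS):
        u_av, u_hv = payoffs(rule, a_av, a_hv)
        best_av = all(u_av >= payoffs(rule, d, a_hv)[0] for d in ACTIONS)
        best_hv = all(u_hv >= payoffs(rule, a_av, d)[1] for d in ACTIONS)
        if best_av and best_hv:
            eq.append((a_av, a_hv))
    return eq

for rule in ["no_fault", "contributory", "comparative"]:
    print(rule, pure_nash(rule))
```

With these illustrative numbers, the no-fault rule yields the (long, long) equilibrium, while the contributory and comparative rules both shift it to (short, short), mirroring the incentive effect described above.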
Towards Computational Foreseeability
This paper addresses the challenges of computational accountability in autonomous systems, particularly in autonomous vehicles (AVs), where safety and efficiency often conflict. We begin by examining current approaches such as cost minimization, reward maximization, human-centered approaches, and ethical frameworks, noting their limitations in addressing these challenges. Foreseeability is a central concept in tort law that limits an actor's accountability and legal liability to a reasonable scope. Yet current data-driven methods to determine foreseeability are rigid, ignore uncertainty, and depend on simulation data. In this work, we advocate for a new computational approach to establishing the foreseeability of autonomous systems based on the legal "BPL" formula. We present open research challenges, using fully autonomous vehicles as a motivating example, and call on researchers to help autonomous systems make accountable decisions in safety-critical scenarios.
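For context, the "BPL" formula is tort law's Learned Hand test: an actor is negligent for omitting a precaution whose burden B is less than the expected harm it would prevent, i.e., B < P × L. Below is a minimal sketch of the naive point-estimate reading of that test; the data class and example values are illustrative assumptions, and the paper's argument is precisely that such rigid, uncertainty-free checks are insufficient:

```python
# Illustrative sketch of a Hand-formula (BPL) check: a precaution is
# "reasonable" (and omitting it negligent) when its burden is less than
# the expected harm it would avert. The inputs here are hypothetical.

from dataclasses import dataclass

@dataclass
class Precaution:
    name: str
    burden: float        # B: cost of taking the precaution
    harm_prob: float     # P: probability of the harm if the precaution is omitted
    harm_loss: float     # L: magnitude of the loss if the harm occurs

def hand_formula_required(p: Precaution) -> bool:
    """B < P * L  =>  the precaution is legally required."""
    return p.burden < p.harm_prob * p.harm_loss

# Example: braking earlier costs 1.0 (e.g., delay/discomfort) but averts a
# 0.02-probability crash with loss 100 -> expected harm 2.0 > 1.0, so required.
print(hand_formula_required(Precaution("brake_early", 1.0, 0.02, 100.0)))  # True
```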
- Award ID(s): 2131531
- PAR ID: 10544017
- Publisher / Repository: AAAI Conference on Artificial Intelligence
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Balancing performance and safety is crucial to deploying autonomous vehicles in multi-agent environments. In particular, autonomous racing is a domain that penalizes safe but conservative policies, highlighting the need for robust, adaptive strategies. Current approaches either make simplifying assumptions about other agents or lack robust mechanisms for online adaptation. This work makes algorithmic contributions to both challenges. First, to generate a realistic, diverse set of opponents, we develop a novel method for self-play based on replica-exchange Markov chain Monte Carlo. Second, we propose a distributionally robust bandit optimization procedure that adaptively adjusts risk aversion relative to uncertainty in beliefs about opponents' behaviors. We rigorously quantify the tradeoffs in performance and robustness when approximating these computations in real-time motion planning, and we demonstrate our methods experimentally on autonomous vehicles that achieve scaled speeds comparable to Formula One racecars. (A minimal max-min sketch of the distributionally robust idea appears after this list.)
- We understand sociotechnical systems (STSs) as uniting social and technical tiers to provide abstractions for capturing how autonomous principals interact with each other. Accountability is a foundational concept in STSs and an essential component of achieving ethical outcomes. In simple terms, accountability involves identifying who can call whom to account and who must provide an accounting of what and when. Although accountability is essential in any application involving autonomous parties, established methods don't support it. We formulate an accountability requirement as one where one principal is accountable to another regarding some conditional expectation. Our metamodel for STSs captures accountability requirements as relational constructs inspired by legal concepts such as commitments, authorization, and prohibition. We apply our metamodel to a healthcare process and show how it helps address the problems of ineffective interaction identified in the original case study. (A minimal commitment sketch appears after this list.)
- Autonomous vehicles (AVs) are on the verge of changing the transportation industry. Despite the fast development of autonomous driving systems (ADSs), they still face safety and security challenges. Current defensive approaches usually focus on a narrow objective and are bound to specific platforms, making them difficult to generalize. To overcome these limitations, we propose AVMaestro, an efficient and effective policy enforcement framework for full-stack ADSs. AVMaestro includes a code instrumentation module that systematically collects the required information across the entire ADS, which is then fed into a centralized data examination module, where users can utilize the global information to deploy defensive methods that protect AVs from various threats. AVMaestro is evaluated on top of Apollo-6.0, and experimental results confirm that it can be easily incorporated into the original ADS with almost negligible run-time delay. We further demonstrate that utilizing the global information can not only improve the accuracy of existing intrusion detection methods but also potentially inspire new security applications. (A loose policy-enforcement sketch appears after this list.)
- Modeling is a significant piece of the puzzle in achieving safety certificates for distributed IoT and cyber-physical systems. From smart home devices to connected and autonomous vehicles, several modeling challenges, such as dynamic membership of participants and complex interaction patterns, span application domains. Modeling multiple interacting vehicles can become unwieldy and impractical as vehicles change relative positions and lanes. In this paper, we present an egocentric abstraction for succinctly modeling local interactions among an arbitrary number of agents around an ego agent. These models abstract away the detailed behavior of the other agents and ignore present but physically distant agents. We show that this approach can capture interesting scenarios considered in the responsibility-sensitive safety (RSS) framework for autonomous vehicles. As an illustration of how the framework can be useful for analysis, we prove the safety of several highway driving scenarios using egocentric models. The proof technique also brings to the forefront the power of a classical verification approach, namely inductive invariant assertions. We discuss possible generalizations of the analysis to other scenarios and applications. (An RSS-style invariant sketch appears after this list.)
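To illustrate the distributionally robust idea from the first related item: a minimal max-min sketch that picks the racing action with the best worst-case value over an ambiguity set of opponent models. The actions, opponent modes, and all numbers are illustrative assumptions, not the paper's algorithm:

```python
# Sketch of distributionally robust action selection over an ambiguity set
# of beliefs about an opponent's behavior. This captures only the core
# max-min idea, not the paper's bandit procedure.

# Rows: our candidate racing actions; columns: opponent behavior modes.
REWARD = {
    "overtake_inside":  {"blocks": -5.0, "yields": 10.0},
    "overtake_outside": {"blocks":  1.0, "yields":  6.0},
    "hold_position":    {"blocks":  2.0, "yields":  2.0},
}

# Ambiguity set: several plausible distributions over opponent modes,
# e.g. posteriors still consistent with observed behavior.
BELIEFS = [
    {"blocks": 0.7, "yields": 0.3},
    {"blocks": 0.4, "yields": 0.6},
    {"blocks": 0.2, "yields": 0.8},
]

def worst_case_value(action: str) -> float:
    """Expected reward under the least favorable belief in the set."""
    return min(
        sum(p[mode] * REWARD[action][mode] for mode in p)
        for p in BELIEFS
    )

best = max(REWARD, key=worst_case_value)
print(best, worst_case_value(best))   # picks the cautious overtake here
```

Shrinking the ambiguity set as beliefs sharpen reduces effective risk aversion, which is the adaptive tradeoff the abstract describes.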
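For the second item, an accountability requirement can be read as a conditional commitment from a debtor to a creditor. A minimal sketch under that reading (field names follow common commitment notation; the healthcare instance is hypothetical, not the case study's actual process):

```python
# Sketch of an accountability requirement modeled as a commitment: a
# debtor is accountable to a creditor for a consequent once the
# antecedent holds.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Commitment:
    debtor: str                          # who is accountable
    creditor: str                        # to whom
    antecedent: Callable[[dict], bool]   # the conditioning expectation
    consequent: Callable[[dict], bool]   # what must be brought about

    def status(self, state: dict) -> str:
        if not self.antecedent(state):
            return "conditional"         # not yet in force
        return "satisfied" if self.consequent(state) else "violated"

# Hypothetical healthcare-style instance: once a test result is ready,
# the lab is accountable to the physician for delivering it in time.
c = Commitment(
    debtor="lab",
    creditor="physician",
    antecedent=lambda s: s["result_ready"],
    consequent=lambda s: s["delivered_within_24h"],
)
print(c.status({"result_ready": True, "delivered_within_24h": False}))  # violated
```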
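For the third item, the core architectural idea is instrumented modules reporting into one examiner that evaluates user-supplied, cross-module policies. A loose sketch of that pattern (the class, module names, and policy are hypothetical, not AVMaestro's actual API):

```python
# Sketch of centralized policy enforcement: instrumentation points across
# the ADS stack publish state to a single examiner, so policies can use
# the global picture rather than one module's local view.

from typing import Callable

class CentralExaminer:
    def __init__(self):
        self.snapshot: dict = {}     # latest reported state per module
        self.policies: list[Callable[[dict], bool]] = []

    def report(self, module: str, data: dict):
        """Called from instrumentation points across the ADS stack."""
        self.snapshot[module] = data
        self.enforce()

    def enforce(self):
        for policy in self.policies:
            if not policy(self.snapshot):
                print("policy violated -> trigger defense (e.g., safe stop)")

examiner = CentralExaminer()
# Cross-module policy: a very close obstacle report is implausible while
# localization says the ego is still moving fast.
examiner.policies.append(
    lambda s: not (s.get("perception", {}).get("obstacle_dist_m", 1e9) < 5
                   and s.get("localization", {}).get("speed_mps", 0) > 20)
)
examiner.report("localization", {"speed_mps": 25.0})
examiner.report("perception", {"obstacle_dist_m": 3.0})   # fires the policy
```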
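For the last item, one concrete instance of an egocentric invariant is the RSS safe longitudinal distance between the ego and the single vehicle directly ahead, with all other agents abstracted away. A sketch using the standard RSS distance form with illustrative parameter values (this is not the paper's proof machinery, just the invariant being checked):

```python
# Sketch of an egocentric safety invariant in the spirit of RSS: the ego
# keeps at least the RSS safe longitudinal distance from the car directly
# ahead. Parameter values are illustrative.

def rss_safe_distance(v_rear: float, v_front: float,
                      rho: float = 0.5,      # response time [s]
                      a_max: float = 3.0,    # max accel during response [m/s^2]
                      b_min: float = 4.0,    # rear car's min braking [m/s^2]
                      b_max: float = 8.0) -> float:
    """Minimum gap so the rear car can always stop without collision."""
    d = (v_rear * rho
         + 0.5 * a_max * rho**2
         + (v_rear + rho * a_max) ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(d, 0.0)

def invariant_holds(gap: float, v_ego: float, v_lead: float) -> bool:
    """Inductive invariant: the gap never drops below the RSS safe distance."""
    return gap >= rss_safe_distance(v_ego, v_lead)

print(invariant_holds(gap=80.0, v_ego=25.0, v_lead=20.0))  # True
```

An inductive proof would show this predicate is preserved by every allowed transition of the ego and the lead vehicle, which is the style of argument the paper applies to its highway scenarios.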