Title: Off is Not Off: On the Security of Parked Vehicles
Award ID(s):
1646130
PAR ID:
10200054
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 IEEE Conference on Communications and Network Security (CNS)
Page Range / eLocation ID:
1 to 9
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We answer the question: if a vacuum sector Hamiltonian is regularized by an energy cutoff, how is the one-kink sector Hamiltonian regularized? We find that it is not regularized by an energy cutoff: normal modes of all energies are present in the kink Hamiltonian. Rather, the decomposition of the field into normal mode operators yields coefficients that lie on a constrained surface, which forces them to become small at energies above the cutoff. This explains the old observation that an energy cutoff of the kink Hamiltonian leads to an incorrect one-loop kink mass. To arrive at our conclusion, we impose that the regularized kink sector Hamiltonian be unitarily equivalent to the regularized vacuum sector Hamiltonian. This condition implies that the two regularized Hamiltonians have the same spectrum, and so guarantees that the kink Hamiltonian yields the correct kink mass. 
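    The spectral statement in the last two sentences is the standard fact that unitary conjugation preserves spectra; in LaTeX (our gloss, with U an arbitrary unitary, not notation from the paper):

      \[
        H^{\mathrm{reg}}_{\mathrm{kink}} = U^{\dagger} H^{\mathrm{reg}}_{\mathrm{vac}} U,
        \qquad
        H^{\mathrm{reg}}_{\mathrm{vac}}\,|\psi\rangle = E\,|\psi\rangle
        \;\Longrightarrow\;
        H^{\mathrm{reg}}_{\mathrm{kink}}\,\bigl(U^{\dagger}|\psi\rangle\bigr) = E\,\bigl(U^{\dagger}|\psi\rangle\bigr),
      \]

    so the two regularized Hamiltonians share every eigenvalue, which is what guarantees the correct one-loop kink mass.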
  2. Bringmann, Karl; Grohe, Martin; Puppis, Gabriele; Svensson, Ola (Ed.)
    The splitting-off operation in undirected graphs is a fundamental reduction operation that detaches all edges incident to a given vertex and adds new edges between the neighbors of that vertex while preserving their degrees. Lovász [Lovász, 1974; Lovász, 1993] and Mader [Mader, 1978] showed the existence of this operation while preserving global and local connectivities respectively in graphs under certain conditions. These results have far-reaching applications in graph algorithms literature [Lovász, 1976; Mader, 1978; Frank, 1993; Frank and Király, 2002; Király and Lau, 2008; Frank, 1992; Goemans and Bertsimas, 1993; Frank, 1994; Bang-Jensen et al., 1995; Frank, 2011; Nagamochi and Ibaraki, 2008; Nagamochi et al., 1997; Henzinger and Williamson, 1996; Goemans, 2001; Jordán, 2003; Kriesell, 2003; Jain et al., 2003; Chan et al., 2011; Bhalgat et al., 2008; Lau, 2007; Chekuri and Shepherd, 2008; Nägele and Zenklusen, 2020; Blauth and Nägele, 2023]. In this work, we introduce a splitting-off operation in hypergraphs. We show that there exists a local connectivity preserving complete splitting-off in hypergraphs and give a strongly polynomial-time algorithm to compute it in weighted hypergraphs. We illustrate the usefulness of our splitting-off operation in hypergraphs by showing two applications: (1) we give a constructive characterization of k-hyperedge-connected hypergraphs and (2) we give an alternate proof of an approximate min-max relation for max Steiner rooted-connected orientation of graphs and hypergraphs (due to Király and Lau [Király and Lau, 2008]). Our proof of the approximate min-max relation for graphs circumvents the Nash-Williams' strong orientation theorem and uses tools developed for hypergraphs. 
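    To make the basic operation concrete, here is a minimal Python sketch of a complete splitting-off at a vertex of a multigraph. This is our illustration, not the paper's algorithm: it pairs the incident edges in an arbitrary fixed order, whereas the results above concern choosing a pairing that preserves connectivity.

      def complete_splitting_off(edges, v, pairing=None):
          """Detach vertex v and reconnect its neighbors, preserving degrees.

          edges   : undirected edges as (a, b) tuples; multi-edges allowed,
                    self-loops at v are assumed absent
          v       : vertex to split off; must have even degree
          pairing : optional list of index pairs into v's incident edges;
                    defaults to pairing them greedily in order
          """
          at_v = [e for e in edges if v in e]
          rest = [e for e in edges if v not in e]
          if len(at_v) % 2 != 0:
              raise ValueError("complete splitting-off needs even degree at v")
          # The endpoint other than v of each incident edge.
          ends = [a if b == v else b for (a, b) in at_v]
          if pairing is None:
              pairing = [(i, i + 1) for i in range(0, len(ends), 2)]
          # Replace each pair of edges {x, v}, {v, y} by a single edge {x, y}.
          # (Pairs with x == y become self-loops, which applications often drop.)
          return rest + [(ends[i], ends[j]) for (i, j) in pairing]

      # Example: splitting off vertex 0 of the 4-cycle 0-1-2-3-0 yields the
      # triangle on {1, 2, 3}; every degree other than v's is unchanged.
      print(complete_splitting_off([(0, 1), (1, 2), (2, 3), (3, 0)], v=0))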
  3. It is clear that one of the primary tools we can use to mitigate the potential risk from a misbehaving AI system is the ability to turn the system off. As the capabilities of AI systems improve, it is important to ensure that such systems do not adopt subgoals that prevent a human from switching the system off. This is a challenge because many formulations of rational agents create strong incentives for self-preservation. This is not caused by a built-in instinct, but because a rational agent will maximize expected utility and cannot achieve whatever objective it has been given if it is dead. Our goal is to study the incentives an agent has to allow itself to be switched off. We analyze a simple game between a human H and a robot R, where H can press R’s off switch but R can disable the off switch. A traditional agent takes its reward function for granted: we show that such agents have an incentive to disable the off switch, except in the special case where H is perfectly rational. Our key insight is that for R to want to preserve its off switch, it needs to be uncertain about the utility associated with the outcome, and to treat H’s actions as important observations about that utility. (R also has no incentive to switch itself off in this setting.) We conclude that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and we argue that this setting is a useful generalization of the classical AI paradigm of rational agents. 
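    The key comparison admits a tiny numerical illustration (ours, not code from the paper). Assume R's belief over the utility U of its action is Gaussian and H is perfectly rational, approving the action exactly when U > 0. Then deferring to H is worth E[max(U, 0)], which weakly dominates both acting after disabling the switch (E[U]) and switching itself off (0); the advantage shrinks to zero as R's uncertainty vanishes.

      import random

      def off_switch_values(mu, sigma, n=100_000, seed=0):
          """Monte Carlo estimate of R's expected utility for its options,
          given a Normal(mu, sigma) belief over U and a rational human H."""
          rng = random.Random(seed)
          samples = [rng.gauss(mu, sigma) for _ in range(n)]
          act = sum(samples) / n                           # disable the switch and act: E[U]
          shutdown = 0.0                                   # switch itself off: 0
          defer = sum(max(u, 0.0) for u in samples) / n    # let H decide: E[max(U, 0)]
          return act, shutdown, defer

      act, shutdown, defer = off_switch_values(mu=-0.5, sigma=1.0)
      print(f"act={act:+.3f}  shutdown={shutdown:+.3f}  defer={defer:+.3f}")
      # defer >= max(act, shutdown): with a rational H and genuine uncertainty,
      # R does best by leaving its off switch in H's hands.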