Runtime monitoring is commonly used to detect the violation of desired properties in safety-critical cyber-physical systems by observing their executions. Bauer et al. introduced an influential framework for monitoring Linear Temporal Logic (LTL) properties based on a three-valued semantics for a finite execution: the formula is already satisfied by the given execution, it is already violated, or it is still undetermined, i.e., it can still be both satisfied and violated by appropriate extensions of the given execution. However, a wide range of formulas are not monitorable under this approach, meaning that there are executions for which satisfaction and violation will always remain undetermined no matter how they are extended. In particular, Bauer et al. report that 44% of the formulas they consider in their experiments fall into this category. Recently, a robust semantics for LTL was introduced to capture different degrees by which a property can be violated. In this paper we introduce a robust semantics for finite strings and show its potential in monitoring: every formula considered by Bauer et al. is monitorable under our approach. Furthermore, we discuss which properties that come naturally in LTL monitoring—such as the realizability of all truth values—can be transferred to the robust setting.
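To make the three-valued verdicts concrete, here is a minimal Python sketch for two simple formulas, G p ("always p") and F p ("eventually p"). The Verdict type and monitor functions are illustrative assumptions, not Bauer et al.'s construction (which builds monitor automata for arbitrary LTL formulas).

```python
# Minimal sketch of Bauer-style three-valued monitoring for two simple
# LTL formulas over a finite trace of Boolean observations. Names and
# structure are illustrative, not the authors' implementation.
from enum import Enum

class Verdict(Enum):
    TRUE = "already satisfied"    # every extension satisfies the formula
    FALSE = "already violated"    # every extension violates the formula
    INCONCLUSIVE = "?"            # both outcomes are still possible

def monitor_globally(trace):
    # G p is violated as soon as p fails once; it can never be declared
    # satisfied on a finite trace, so otherwise the verdict stays "?".
    if any(not p for p in trace):
        return Verdict.FALSE
    return Verdict.INCONCLUSIVE

def monitor_eventually(trace):
    # F p is satisfied as soon as p holds once; it can never be declared
    # violated on a finite trace, so otherwise the verdict stays "?".
    if any(trace):
        return Verdict.TRUE
    return Verdict.INCONCLUSIVE

# A formula like G F p ("infinitely often p") is non-monitorable in this
# setting: no finite trace ever forces a TRUE or FALSE verdict.
print(monitor_globally([True, True, False]))   # Verdict.FALSE
print(monitor_eventually([False, False]))      # Verdict.INCONCLUSIVE
```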
-
This paper addresses the problem of decentralized learning in the presence of data poisoning attacks. In this problem, we consider a collection of nodes connected through a network, each equipped with a local function. The objective is to compute the global minimizer of the aggregated local functions in a decentralized manner, i.e., each node can only use its local function and data exchanged with the nodes it is connected to. Moreover, all nodes must agree on this minimizer despite an adversary that can arbitrarily change the local functions of a fraction of the nodes. This problem setting has applications in robust learning, where nodes in a network are collectively training a model that minimizes the empirical loss with possibly attacked local data sets. In this paper, we propose a novel decentralized learning algorithm that enables all nodes to reach consensus on the optimal model in the absence of attacks, and approximate consensus in the presence of data poisoning attacks.
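As a rough illustration of the setting (not the paper's algorithm), the sketch below runs decentralized gradient steps in which each node aggregates its own and its neighbors' reported models with a trimmed mean to blunt poisoned updates. The ring topology, quadratic local losses, trimming rule, and step size are all assumptions made for the example.

```python
# Minimal sketch of decentralized optimization with trimmed-mean
# aggregation to limit the influence of poisoned neighbors.
import numpy as np

def trimmed_mean(values, f):
    """Drop the f smallest and f largest values, average the rest."""
    v = np.sort(values)
    return v[f:len(v) - f].mean() if len(v) > 2 * f else v.mean()

def decentralized_step(x, neighbors, targets, f, lr=0.1):
    new_x = np.empty_like(x)
    for i in range(len(x)):
        # Robust consensus over own value and neighbors' reported values.
        reports = np.array([x[i]] + [x[j] for j in neighbors[i]])
        consensus = trimmed_mean(reports, f)
        # Gradient of the toy local loss f_i(x) = (x - targets[i])^2.
        grad = 2.0 * (x[i] - targets[i])
        new_x[i] = consensus - lr * grad
    return new_x

# 6 nodes on a ring; node 0 is compromised and always reports a huge value.
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
targets = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x = np.zeros(6)
for _ in range(200):
    x[0] = 1e6                      # data-poisoning attack on node 0
    x = decentralized_step(x, neighbors, targets, f=1)
print(x[1:])  # honest nodes settle near a common approximate minimizer
```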
-
In this paper, we derive closed-form expressions for implicit controlled invariant sets for discrete-time controllable linear systems with measurable disturbances. In particular, a disturbance-reactive (or disturbance feedback) controller in the form of a parameterized finite automaton is considered. We show that, for a class of automata, the robust positively invariant sets of the corresponding closed-loop systems can be expressed by a set of linear inequality constraints in the joint space of system states and controller parameters. This leads to an implicit representation of the invariant set in a lifted space. We further show how the same parameterization can be used to compute invariant sets when the disturbance is not available for measurement.
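To illustrate what such an implicit representation buys, the sketch below checks membership in a toy lifted-space polytope H[x; θ] ≤ h and recovers the explicit state-space set by searching over the controller parameter. The matrices are placeholders, not sets derived by the paper's construction.

```python
# Minimal sketch of using an implicit invariant set given as linear
# inequalities H @ [x; theta] <= h in the lifted state/parameter space.
import numpy as np

# Toy description with variables z = (x1, x2, theta):
# |x1| <= 1 + theta, |x2| <= 1, 0 <= theta <= 1.
H = np.array([
    [ 1.0,  0.0, -1.0],   #  x1 - theta <= 1
    [-1.0,  0.0, -1.0],   # -x1 - theta <= 1
    [ 0.0,  1.0,  0.0],   #  x2 <= 1
    [ 0.0, -1.0,  0.0],   # -x2 <= 1
    [ 0.0,  0.0,  1.0],   #  theta <= 1
    [ 0.0,  0.0, -1.0],   # -theta <= 0
])
h = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.0])

def in_lifted_set(x, theta, tol=1e-9):
    """Check membership of (x, theta) in the implicit invariant set."""
    z = np.concatenate([x, [theta]])
    return bool(np.all(H @ z <= h + tol))

def state_in_projection(x, thetas=np.linspace(0.0, 1.0, 101)):
    """x lies in the explicit set iff some parameter value certifies it."""
    return any(in_lifted_set(x, t) for t in thetas)

print(in_lifted_set(np.array([1.5, 0.0]), 0.8))   # True: theta certifies x
print(state_in_projection(np.array([2.5, 0.0])))  # False: no theta works
```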
-
For a class of Cyber-Physical Systems (CPSs), we address the problem of performing computations over the cloud without revealing private information about the structure and operation of the system. We model CPSs as a collection of input-output dynamical systems (the system operation modes). Depending on the mode the system is operating in, the output trajectory is generated by one of these systems in response to driving inputs. Output measurements and driving inputs are sent to the cloud for processing purposes. We capture this "processing" through some function (of the input-output trajectory) that we require the cloud to compute accurately - referred to here as the trajectory utility. However, for privacy reasons, we would like to keep the mode private, i.e., we do not want the cloud to correctly identify what mode of the CPS produced a given trajectory. To this end, we distort trajectories before transmission and send the corrupted data to the cloud. We provide mathematical tools (based on output-regulation techniques) to properly design distorting mechanisms so that: 1) the original and distorted trajectories lead to the same utility; and 2) the distorted data leads the cloud to misclassify the mode.
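A minimal sketch of the core idea, under toy assumptions: if the utility is the trajectory mean, any zero-mean distortion lies in its kernel, so the cloud computes exactly the same utility from corrupted data while a naive mode classifier is fooled. The utility, distortion, and classifier below are illustrative, not the output-regulation design from the paper.

```python
# Minimal sketch of utility-preserving distortion: perturb the trajectory
# with a signal in the null space of the utility map, so the computed
# utility is unchanged while mode-revealing shape is masked.
import numpy as np

def utility(y):
    # Toy utility the cloud must compute exactly: average output level.
    return y.mean()

def distort(y, rng):
    # Any zero-mean perturbation lies in the kernel of the mean, so the
    # utility computed from the corrupted trajectory is preserved.
    d = rng.normal(scale=2.0, size=y.shape)
    return y + (d - d.mean())

def naive_mode_classifier(y):
    # A naive cloud-side test: strong variation suggests the oscillatory mode.
    return "A" if y.std() > 0.25 else "B"

rng = np.random.default_rng(0)
t = np.arange(100)
y_mode_a = 1.0 + 0.5 * np.sin(0.2 * t)   # mode A: oscillatory output
y_mode_b = np.ones_like(y_mode_a)        # mode B: constant output

y_sent = distort(y_mode_b, rng)
print(np.isclose(utility(y_sent), utility(y_mode_b)))  # True: same utility
print(naive_mode_classifier(y_mode_b),                 # "B": true mode
      naive_mode_classifier(y_sent))                   # "A": cloud misclassifies
```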
-
Fully autonomous vehicles need the ability to localize without external help, for instance by using visual sensors together with a pre-loaded map of landmarks. In this paper we connect self-localization using landmarks with coding theory. This connection enables us to translate Hamming distance properties into probabilistic localization guarantees given a certain number of errors in landmark identification; it also enables us to leverage existing polynomial-time decoding algorithms for localization. We present promising numerical evaluation results by simulating vehicle traveling paths along a road network generated from real data of a region in Washington, D.C.
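A minimal sketch of the decoding view, with a made-up codebook: each candidate position maps to the binary vector of landmark identities expected there, and a noisy observation is resolved by minimum-Hamming-distance decoding.

```python
# Minimal sketch of localization as minimum-Hamming-distance decoding:
# each candidate position is associated with the binary vector of landmark
# identities visible from it, and a noisy observation is decoded to the
# nearest such "codeword". The codebook below is a toy example.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical codebook: position -> expected landmark-identification bits.
# Its minimum distance is 3, so one misidentified landmark is correctable.
codebook = {
    "intersection_A": (0, 0, 0, 0, 0, 0),
    "intersection_B": (1, 1, 1, 0, 0, 0),
    "intersection_C": (0, 0, 0, 1, 1, 1),
}

def localize(observed):
    # Minimum-distance decoding; correct whenever the number of landmark
    # errors is below half the codebook's minimum Hamming distance.
    return min(codebook, key=lambda pos: hamming(codebook[pos], observed))

# One landmark misidentified (first bit flipped) still decodes correctly.
print(localize((1, 0, 0, 0, 0, 0)))  # intersection_A
```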