Title: Information avoidance and overvaluation under epistemic constraints: Principles and implications for regulatory policies
The Value of Information (VoI) assesses the impact of data in a decision process. A risk-neutral agent, quantifying the VoI in monetary terms, prefers to collect data only if their VoI exceeds the cost of collecting them. For an agent acting without external constraints, data have non-negative VoI (free "information cannot hurt"), and data with an almost-negligible potential effect on the agent's belief have an almost-negligible VoI. However, these intuitive properties do not hold for an agent acting under external constraints related to epistemic quantities, such as those posed by some regulations. For example, a manager forced to repair an asset when its probability of failure is too high may prefer to avoid collecting free information about the actual condition of the asset, and may even pay to avoid it, or she may assign a high VoI to almost-irrelevant data. Hence, by enforcing epistemic constraints in the regulations, the policy-maker can induce a range of counter-intuitive, but rational, behaviors in the agents obeying the regulations, from information avoidance to overvaluation of barely relevant information. This paper illustrates how the structural properties of VoI change depending on such external epistemic constraints, and discusses how incentives and penalties can alleviate these induced attitudes toward information.
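As a minimal numerical sketch of the mechanism described above (all numbers are invented for illustration and are not taken from the paper: repair cost 100, failure cost 250, prior damage probability 0.10, an imperfect test with 90% sensitivity and a 20% false-alarm rate, and a hypothetical regulatory rule forcing repair whenever the probability of damage exceeds 0.20):

    # Hypothetical numbers, not from the paper: a risk-neutral manager of one
    # asset decides between Repair (cost 100) and Do Nothing (expected failure
    # cost 250 if the asset is damaged), with an optional regulation forcing
    # repair whenever the probability of damage exceeds 0.20.
    C_REPAIR, C_FAIL = 100.0, 250.0
    P_DAMAGED = 0.10                 # prior probability that the asset is damaged
    P_ALARM_IF_DAMAGED = 0.90        # sensitivity of the (free) test
    P_ALARM_IF_INTACT = 0.20         # false-alarm rate of the test
    THRESHOLD = 0.20                 # assumed regulatory repair threshold

    def best_cost(p_damaged, constrained):
        """Expected cost of the best admissible action at belief p_damaged."""
        if constrained and p_damaged > THRESHOLD:
            return C_REPAIR                      # the regulation forces the repair
        return min(C_REPAIR, p_damaged * C_FAIL)

    def cost_with_free_test(constrained):
        """Expected cost when the test outcome is observed before acting."""
        p_alarm = (P_ALARM_IF_DAMAGED * P_DAMAGED
                   + P_ALARM_IF_INTACT * (1.0 - P_DAMAGED))
        p_dam_alarm = P_ALARM_IF_DAMAGED * P_DAMAGED / p_alarm          # Bayes
        p_dam_quiet = (1.0 - P_ALARM_IF_DAMAGED) * P_DAMAGED / (1.0 - p_alarm)
        return (p_alarm * best_cost(p_dam_alarm, constrained)
                + (1.0 - p_alarm) * best_cost(p_dam_quiet, constrained))

    for constrained in (False, True):
        no_test = best_cost(P_DAMAGED, constrained)
        with_test = cost_with_free_test(constrained)
        print(f"constrained={constrained}: cost without test {no_test:.1f}, "
              f"with free test {with_test:.1f}, VoI {no_test - with_test:+.1f}")

With these numbers the test never changes the unconstrained decision, so the unconstrained VoI is zero, while the epistemic constraint makes it -4.5: the manager would rationally pay up to 4.5 to avoid the free inspection.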
Award ID(s):
1638327
PAR ID:
10161384
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Reliability Engineering & System Safety
Volume:
197
ISSN:
0951-8320
Page Range / eLocation ID:
106814
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The value of information (VoI) provides a rational metric to assess the impact of data in decision processes, including maintenance of engineering systems. According to the principle that "information never hurts", VoI is guaranteed to be non-negative when a single agent aims at minimizing an expected cost. However, in other contexts such as non-cooperative games, where agents compete against each other, revealing a piece of information to all agents may have a negative impact on some of them, as the negative effect of the competitors being informed and adjusting their policies surpasses the direct VoI. Aware of this, some agents prefer to avoid having certain information collected, when it must be shared with others, because the overall VoI is negative for them. A similar result may occur for managers of infrastructure assets following the prescriptions of codes and regulations. Modern codes require that the probability of some failure events be below a threshold, so managers are forced to retrofit assets if that probability is too high. If the economic incentive of those agents disagrees with the code requirements, the VoI associated with tests or inspections may be negative. In this paper, we investigate under what circumstances this happens, and how severe the effects of this issue can be.
  2. We assess the Value of Information (VoI) for inspecting components in systems managed by multiple agents, using game theory and Nash equilibrium analysis. We focus on binary systems made up of binary components, which can be either intact or damaged. Agents taking maintenance actions are responsible for the repair costs of their own components, and the penalty for system failure is shared among all agents. The precision of inspection is also considered, and we identify the prior and posterior Nash equilibria with perfect or imperfect inspections. The VoI is assessed for the individual agents as well as for the whole set of agents, and the analysis considers series, parallel, and general systems. A negative VoI can trigger the phenomenon of information avoidance (IA), where rational agents prefer not to collect free information. We discuss whether the VoI can be negative for one or for all agents, for the agents with inspected or uninspected components, and for the total sum of VoIs.
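     The numbers and the model below are invented (two agents, one binary component each, a series system, individually borne repair costs, and an equally shared failure penalty); the sketch only illustrates the bookkeeping of such an analysis: find the prior and posterior pure-strategy Nash equilibria by brute-force best-response checks and compare each agent's expected equilibrium costs. With these particular numbers the VoI of a perfect inspection of component 1 happens to be non-negative for both agents; the paper studies when it is not.

    from itertools import product

    P_DAMAGE = (0.5, 0.2)           # prior damage probabilities of components 1, 2
    REPAIR_COST = (30.0, 30.0)      # each agent pays to repair its own component
    FAIL_PENALTY_SHARE = 100.0      # each agent's share of the system-failure penalty

    def costs(actions, p_damage):
        """Expected cost of each agent for a (repair?, repair?) action profile."""
        p_intact = [1.0 if a else 1.0 - p for a, p in zip(actions, p_damage)]
        p_fail = 1.0 - p_intact[0] * p_intact[1]                  # series system
        return tuple(a * r + FAIL_PENALTY_SHARE * p_fail
                     for a, r in zip(actions, REPAIR_COST))

    def pure_nash(p_damage):
        """All action profiles where no agent gains by deviating alone."""
        eqs = []
        for acts in product((0, 1), repeat=2):
            c = costs(acts, p_damage)
            if all(c[i] <= costs(tuple(1 - a if j == i else a
                                       for j, a in enumerate(acts)), p_damage)[i]
                   for i in range(2)):
                eqs.append((acts, c))
        return eqs

    # Prior game: both agents only know the prior damage probabilities.
    # (With these numbers each game has exactly one pure equilibrium.)
    (_, prior_cost), = pure_nash(P_DAMAGE)

    # Posterior games: a perfect inspection reveals component 1 to everyone.
    post_cost = [0.0, 0.0]
    for state, prob in ((1, P_DAMAGE[0]), (0, 1.0 - P_DAMAGE[0])):
        (_, c), = pure_nash((float(state), P_DAMAGE[1]))
        post_cost = [pc + prob * ci for pc, ci in zip(post_cost, c)]

    for i in range(2):
        print(f"agent {i + 1}: VoI of inspecting component 1 = "
              f"{prior_cost[i] - post_cost[i]:+.1f}")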
  3. A basic assumption of traditional reinforcement learning is that the value of a reward does not change once it is received by an agent. The present work forgoes this assumption and considers the situation where the value of a reward decays proportionally to the time elapsed since it was obtained. Emphasizing the inflection point occurring at the time of payment, we use the term asset to refer to a reward that is currently in the possession of an agent. Adopting this language, we initiate the study of depreciating assets within the framework of infinite-horizon quantitative optimization. In particular, we propose a notion of asset depreciation, inspired by classical exponential discounting, where the value of an asset is scaled by a fixed discount factor at each time step after it is obtained by the agent. We formulate an equational characterization of optimality in this context, establish that optimal values and policies can be computed efficiently, and develop a model-free reinforcement learning approach to obtain optimal policies. 
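     The depreciation rule itself is easy to state concretely. The toy reward trace and discount factor below are invented, and the paper's actual infinite-horizon objective and learning algorithm are not reproduced; the sketch only shows that scaling every held asset by a fixed factor at each step makes the current portfolio value a discounted sum taken backwards from the present.

    # Invented toy trace; GAMMA plays the role of the fixed depreciation factor.
    GAMMA = 0.9
    rewards = [0.0, 5.0, 0.0, 0.0, 10.0, 0.0]   # reward obtained at each time step

    portfolio = 0.0
    for t, r in enumerate(rewards):
        portfolio = GAMMA * portfolio + r       # old assets depreciate, new asset added
        print(f"t={t}: portfolio value {portfolio:.3f}")

    # Equivalently, at the final step T the portfolio equals
    # sum over t of rewards[t] * GAMMA ** (T - t).
    T = len(rewards) - 1
    print(sum(r * GAMMA ** (T - t) for t, r in enumerate(rewards)))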
  4. Agent-based crawlers are commonly used in network maintenance and information gathering. In order not to disturb the main functionality of the system, whether acting at nodes or being in transit, they need to operate online, perform each single operation fast, and use small memory. They should also preferably be deterministic, as crawling agents have limited capabilities of generating a large number of truly random bits. We consider a system in which an agent receives an update, typically an insertion or deletion, of some information upon visiting a node. On request, the agent needs to output the hot information, i.e., the items whose net occurrence is above a certain frequency threshold. The desired time and memory complexity of such an agent should be poly-logarithmic in the number of visited nodes and inversely proportional to the frequency threshold. Ours is the first such agent with a rigorous analysis and a complementary, almost-matching lower bound.
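     The abstract does not spell out the algorithm, and the agent above also supports deletions; as a rough point of reference only, the classical insertion-only Misra-Gries summary below conveys the kind of deterministic, small-memory counter structure involved: with k = ceil(1/phi) - 1 counters it never misses an item whose frequency exceeds phi * n, although it may report false positives.

    import math

    class MisraGries:
        """Classical insertion-only heavy-hitter summary (not the paper's agent)."""

        def __init__(self, phi):
            self.k = math.ceil(1.0 / phi) - 1   # number of counters kept
            self.counters = {}
            self.n = 0                          # items seen so far

        def insert(self, item):
            self.n += 1
            if item in self.counters:
                self.counters[item] += 1
            elif len(self.counters) < self.k:
                self.counters[item] = 1
            else:                               # full: decrement every counter
                for key in list(self.counters):
                    self.counters[key] -= 1
                    if self.counters[key] == 0:
                        del self.counters[key]

        def candidates(self):
            """Superset of the items occurring more than phi * n times."""
            return list(self.counters)

    mg = MisraGries(phi=0.25)
    for x in "abracadabra" * 3:
        mg.insert(x)
    print(mg.candidates())          # guaranteed to contain 'a' (15 of 33 items)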

     
  5. Understanding performance data, and more specifically memory access patterns, is essential in optimizing scientific applications. Among the various factors affecting performance, such as the hardware architecture, the algorithms, or the system software stack, performance is also often related to the applications' physics. While there exist a number of techniques to collect relevant performance metrics, such as the number of cache misses, traditional tools almost exclusively present these data relative to the code or as abstract tuples. This can obscure the data-dependent nature of performance bottlenecks and make root-cause analysis difficult. Here we take advantage of the fact that a large class of applications are defined over some domain discretized by a mesh. By projecting the performance data directly onto these meshes, we enable developers to explore the performance data in the context of their application, resulting in more intuitive visualizations. We introduce a lightweight, general interface to couple a performance visualization tool, MemAxes, to an external visualization tool, VisIt. This allows us to harness the advanced analytic capabilities of MemAxes to drive the exploration while exploiting the capabilities of VisIt to visualize both application and performance data in the application domain.
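     The MemAxes/VisIt interface itself is not reproduced here; the toy sketch below, with invented sample fields, only illustrates what projecting performance data onto a mesh amounts to: aggregating per-sample counters by the mesh element each memory access touched, which yields a cell-centered field that a mesh visualizer can render next to the physics fields.

    from collections import defaultdict

    # Each (hypothetical) sample: (id of the mesh cell the accessed datum
    # belongs to, number of cache misses attributed to that access).
    samples = [(0, 3), (1, 1), (0, 7), (2, 2), (2, 9), (1, 0)]

    cell_misses = defaultdict(int)
    for cell_id, misses in samples:
        cell_misses[cell_id] += misses          # project samples onto cells

    # cell_misses is now a per-cell scalar field, e.g. {0: 10, 1: 1, 2: 11},
    # ready to be attached to the mesh and explored alongside application data.
    print(dict(cell_misses))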