Title: Embracing and Controlling Risk Dependency in Cyber Insurance Policy Underwriting
This paper highlights how cyber risk dependencies can be taken into consideration when underwriting cyber-insurance policies. This is done within the context of a base rate insurance policy framework, which is widely used in practice. Specifically, we show that there is an opportunity for an underwriter to better control the risk dependency and the risk spill-over, ultimately resulting in lower overall cyber risks across its portfolio. To do so, we consider a Service Provider (SP) and its customers as the insurer's interdependent customers: a data breach suffered by the SP can cause business interruption to its customers. In underwriting both the SP and its customers, we show that the insurer can increase its profit by incentivizing the SP (through a discount on its premium) to invest more in security, thereby decreasing the chance of business interruption to the customers and increasing social welfare. For comparison, we also consider a scenario where the insurer underwrites only the SP's customers (but not the SP), and receives compensation from the SP's insurance carrier when losses are attributed to the SP. We show that the insurer cannot outperform the case where it underwrites both the SP and its customers. We use an actual cyber-insurance policy and claims data to calibrate and substantiate our analytical findings.
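As a rough illustration of this incentive (a minimal sketch with invented probabilities, premiums, and losses, not the paper's calibrated model), the following compares the insurer's profit when the SP exerts low effort at full premium versus higher effort in exchange for a premium discount:

```python
# Minimal sketch (all parameters invented for illustration): an insurer that
# underwrites both a service provider (SP) and its customers can profit by
# discounting the SP's premium in exchange for higher security investment.
import numpy as np

def breach_prob(effort):
    """Illustrative SP breach probability, decreasing in security effort."""
    return 0.2 * np.exp(-effort)

n_customers = 50                  # customers hit by business interruption on an SP breach
sp_loss, bi_loss = 100.0, 10.0    # SP's direct loss and per-customer interruption loss
sp_premium, cust_premium = 25.0, 3.0

def insurer_profit(effort, sp_discount=0.0):
    premiums = (sp_premium - sp_discount) + n_customers * cust_premium
    expected_payout = breach_prob(effort) * (sp_loss + n_customers * bi_loss)
    return premiums - expected_payout

# Baseline vs. a discount made contingent on higher SP security effort.
print(f"low effort, full premium: {insurer_profit(effort=0.5):7.2f}")
print(f"high effort, discounted : {insurer_profit(effort=2.0, sp_discount=5.0):7.2f}")
```

In this sketch the discount costs the insurer 5 in premium income but cuts expected business-interruption payouts across the portfolio by far more, which is the mechanism the paper formalizes.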
Award ID(s):
1739295
NSF-PAR ID:
10076419
Journal Name:
The Annual Workshop on the Economics of Information Security (WEIS)
Sponsoring Org:
National Science Foundation
More Like this
  1. We study the problem of designing cyber insurance policies in an interdependent network, where the loss of one agent (a primary party) depends not only on its own effort, but also on the investments and efforts of others (third parties) in the same ecosystem (i.e., externalities). In designing cyber insurance policies, the conventional wisdom is to avoid insuring dependent parties for two reasons. First, simultaneous loss incidents threaten the insurer's business and capital. Second, when a loss incident can be attributed to a third party, the insurer of the primary party can seek compensation from the insurer of the third party in order to reduce its own risk exposure. In this work, we analyze an interdependent network model in order to understand whether an insurer should avoid or embrace risk interdependencies. We focus on two interdependent agents, where the risk of one agent (the primary party) depends on the other agent (the third party), but not the other way around. We consider two scenarios: one in which an insurer insures only the primary party, and another in which the insurer of the primary party also insures the third party. We show that it is in fact profitable for the primary party's insurer to insure both agents. Further, we show that insuring both agents not only yields higher profit for the insurer, but also reduces the collective risk.
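    A toy numerical version of this comparison (all functional forms and parameters are invented, not taken from the paper) contrasts the insurer's profit when it covers only the primary party, so the third party picks its effort selfishly, against the case where it covers both agents and can induce the effort level it prefers:

```python
# Toy model (invented parameters): the primary party's loss probability depends
# on the third party's effort. Compare insuring only the primary party against
# insuring both agents and inducing the insurer-preferred third-party effort.
import numpy as np

efforts = np.linspace(0, 4, 401)
cost = lambda e: 0.5 * e**2                    # third party's cost of effort
p_third = lambda e: 0.3 * np.exp(-e)           # third party's own loss probability
p_primary = lambda e: 0.25 * np.exp(-0.8 * e)  # primary's loss prob., set by third-party effort

loss = 100.0
prem_primary, prem_third = 20.0, 20.0

# Only the primary party insured: the uninsured third party ignores the
# externality and minimizes its own effort cost plus expected loss.
e_selfish = efforts[np.argmin(cost(efforts) + p_third(efforts) * loss)]
profit_one = prem_primary - p_primary(e_selfish) * loss

# Both agents insured: the insurer internalizes the spill-over and (here,
# crudely) bears the effort cost needed to induce its preferred effort level.
payout = (p_primary(efforts) + p_third(efforts)) * loss
profit_both = prem_primary + prem_third - payout - cost(efforts)
e_induced = efforts[np.argmax(profit_both)]

print(f"insure primary only: profit {profit_one:6.2f}, third-party effort {e_selfish:.2f}")
print(f"insure both agents : profit {profit_both.max():6.2f}, induced effort {e_induced:.2f}")
print("collective risk falls:",
      p_primary(e_induced) + p_third(e_induced) < p_primary(e_selfish) + p_third(e_selfish))
```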
  2.
    Cyber insurance, like other types of insurance, is a method of risk transfer, where the insured pays a premium in exchange for coverage in the event of a loss. As a result of the reduced risk for the insured and the lack of information on the insurer's side, the insured is generally inclined to lower its effort, leading to a worse state of security, a common phenomenon known as moral hazard. To mitigate moral hazard, a widely employed concept is premium discrimination, i.e., an agent/insured who exerts higher effort pays a lower premium. This, however, relies on the insurer's ability to assess the effort exerted by the insured. In this paper, we study two methods of premium discrimination that rely on two different types of assessment: pre-screening and post-screening. Pre-screening occurs before the insured enters into a contract and can be done at the beginning of each contract period; this process gives the insurer an estimate of the insured's risk, which then determines the contract terms. The post-screening mechanism involves at least two contract periods, whereby the second-period premium is increased if a loss event occurs during the first period. Prior work shows that both pre-screening and post-screening are generally effective in mitigating moral hazard and increasing the insured's effort. The analysis in this study shows, however, that the conclusion becomes more nuanced when loss events are rare. Specifically, we show that post-screening is not effective at all with rare losses, while pre-screening can be an effective method when the agent perceives losses as rarer than the insurer does; in this case, pre-screening improves both the agent's effort level and the insurer's profit.
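    The intuition for the rare-loss result can be seen in a back-of-the-envelope calculation (toy numbers, not the paper's model): under post-screening, the premium surcharge is only ever triggered by a first-period loss, so when losses are rare the expected surcharge barely depends on effort and the incentive collapses:

```python
# Toy two-period post-screening calculation (invented numbers): the
# second-period premium carries a surcharge only if a loss occurs in period 1.
def expected_agent_cost(base_prob, effort, premium=10.0, surcharge=6.0, effort_cost=1.0):
    """Agent's expected two-period cost under post-screening."""
    p = base_prob * (0.5 if effort else 1.0)   # effort halves the loss probability
    return effort_cost * effort + 2 * premium + p * surcharge

for base_prob in (0.4, 0.01):                  # common vs. rare loss events
    shirk = expected_agent_cost(base_prob, effort=0)
    exert = expected_agent_cost(base_prob, effort=1)
    print(f"loss prob {base_prob:4.2f}: shirk {shirk:6.2f}, exert {exert:6.2f}, "
          f"effort worthwhile: {exert < shirk}")
```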
  3.
    Networks like those of healthcare infrastructure have been a primary target of cyberattacks for over a decade. From just a single cyberattack, a healthcare facility would expect to see millions of dollars in losses from legal fines, business interruption, and loss of revenue. As more medical devices become interconnected, more cyber vulnerabilities emerge, resulting in more potential exploitation that may disrupt patient care and give rise to catastrophic financial losses. In this paper, we propose a structural model of an aggregate loss distribution across multiple cyberattacks on a prototypical hospital network. Modeled as a mixed random graph, the hospital network consists of various patient-monitoring devices and medical imaging equipment as random nodes, accounting for the variable occupancy of patient rooms and availability of imaging equipment, connected by bidirectional edges to fixed hospital and radiological information systems. Our framework accounts for the documented cyber vulnerabilities of a hospital's trusted internal network of its major medical assets. To our knowledge, there exist no other models of an aggregate loss distribution for cyber risk in this setting. We contextualize the problem in a probabilistic graph-theoretical framework, using a percolation model and combinatorial techniques to compute the mean and variance of the loss distribution for a mixed random network with associated random costs; these results can help healthcare administrators and cybersecurity professionals improve their cybersecurity management strategies. By characterizing this distribution, we allow for the further utility of pricing cyber risk.
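    As a rough companion to the analytical results, the Monte Carlo sketch below estimates the mean and standard deviation of aggregate loss on an invented miniature version of such a network (the topology, probabilities, and dollar figures are illustrative only; the paper computes these moments analytically via percolation and combinatorial arguments):

```python
# Monte Carlo sketch on a toy hospital network: random bedside devices attach
# to a fixed core information system; an attack percolates from compromised
# devices to the core. All structure and parameters are invented.
import random

random.seed(0)

def simulate_loss(n_rooms=30, occupancy=0.8, p_compromise=0.05,
                  device_loss=50_000, core_loss=2_000_000, p_percolate=0.3):
    occupied = [random.random() < occupancy for _ in range(n_rooms)]   # random nodes
    compromised = [o and random.random() < p_compromise for o in occupied]
    loss = sum(compromised) * device_loss
    # Any breached device exposes the trusted internal core system.
    if any(compromised) and random.random() < p_percolate:
        loss += core_loss
    return loss

draws = [simulate_loss() for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / (len(draws) - 1)
print(f"estimated mean loss ${mean:,.0f}, std dev ${var**0.5:,.0f}")
```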
  4. Data privacy requirements are a complex and quickly evolving part of the data management domain. Especially in healthcare (e.g., the United States Health Insurance Portability and Accountability Act and Veterans Affairs requirements), there has been a strong emphasis on data privacy and protection. Data storage is governed by multiple sources of policy requirements, including internal policies and legal requirements imposed by external governing organizations. Within a database, a single value can be subject to multiple requirements on how long it must be preserved and when it must be irrecoverably destroyed. This often results in a complex set of overlapping and potentially conflicting policies. Existing storage systems lack sufficient support for these critical and evolving rules, making compliance an underdeveloped aspect of data management. As a result, many organizations must implement manual, ad hoc solutions to ensure compliance. As long as organizations depend on manual approaches, there is an increased risk of non-compliance and a threat to customer data privacy. In this paper, we detail and implement an automated, comprehensive data management compliance framework that facilitates retention and purging compliance within a database management system. This framework can be integrated into existing databases without requiring changes to existing business processes. Our proposed implementation uses SQL to set policies and automate compliance. We validate this framework on a Postgres database and measure the factors that contribute to its reasonable performance overhead (13% in a simulated real-world workload).
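    A minimal sketch of the idea of SQL-declared retention policies follows (the schema, policy values, and purge routine are hypothetical, shown on SQLite for self-containment; the paper's framework targets Postgres and lives inside the DBMS):

```python
# Hypothetical illustration of declaring retention policies as data and
# purging expired rows; schema and policy names are invented for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient_records (
        id INTEGER PRIMARY KEY,
        payload TEXT,
        created_at TEXT   -- ISO-8601 timestamp
    );
    -- Policies live in the database, not in application code.
    CREATE TABLE retention_policy (
        table_name TEXT,
        min_retention_days INTEGER,  -- must be preserved at least this long
        max_retention_days INTEGER   -- must be destroyed after this long
    );
    INSERT INTO retention_policy VALUES ('patient_records', 365, 2555);
""")

def purge_expired(conn):
    """Irrecoverably delete rows older than their table's maximum retention."""
    for table, _min_days, max_days in conn.execute("SELECT * FROM retention_policy"):
        # Table names come from the trusted policy table, so interpolation is safe here.
        conn.execute(f"DELETE FROM {table} WHERE "
                     "julianday('now') - julianday(created_at) > ?", (max_days,))
    conn.commit()

purge_expired(conn)
```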
  5. Internet users have suffered collateral damage in tussles over paid peering between large ISPs and large content providers. Paid peering is a relationship in which two networks exchange traffic with payment, giving each direct access to the other's customers without having to pay a third party to carry that traffic. The issue will arise again when the United States Federal Communications Commission (FCC) considers a new net neutrality order. We first consider the effect of paid peering on broadband prices. We adopt a two-sided market model in which an ISP maximizes profit by setting broadband prices and a paid peering price. We analytically derive the profit-maximizing prices and show that they satisfy a generalization of the well-known Lerner rule. Our result shows that paid peering fees reduce the premium plan price and increase the video streaming price and the total price for premium-tier customers who subscribe to video streaming services; however, the ISP passes on to its customers only a portion of the revenue from paid peering. ISP profit increases but video streaming profit decreases as an ISP moves from settlement-free peering to paid peering. We next consider the effect of paid peering on consumer surplus. We find that consumer surplus is a uni-modal function of the paid peering fee, and the fee that maximizes consumer surplus depends on the elasticities of demand for broadband and for video streaming. Consumer surplus is maximized when paid peering fees are significantly lower than those that maximize ISP profit, but it does not follow that settlement-free peering is always the policy that maximizes consumer surplus: the surplus-maximizing peering price depends critically on the incremental ISP cost per video streaming subscriber, and at different costs it can be negative, zero, or positive.
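    For background, the classical single-product Lerner rule ties a profit-maximizing markup to the price elasticity of demand; the paper's contribution is a generalization of this condition to the joint choice of broadband and paid peering prices (the generalized form itself is not reproduced here):

```latex
% Classical Lerner rule, shown for background only (not the paper's
% generalized condition for joint broadband and peering prices).
\[
  \frac{p - c}{p} = \frac{1}{|\varepsilon|},
  \qquad
  \varepsilon = \frac{\partial q}{\partial p}\,\frac{p}{q}
\]
```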