Title: Stress tests and information disclosure: An experimental analysis
To improve the stability of the banking system, the Dodd-Frank Act mandates that central banks conduct periodic evaluations of banks’ financial conditions. An intensely debated aspect of these ‘stress tests’ is how much of the information they generate should be disclosed to financial markets. This paper uses an environment constructed from a model by Goldstein and Leitner (2018) to gain behavioral insight into the policy tradeoffs associated with disclosure. Experimental results indicate that the effects of varying disclosure conditions are sensitive to overbidding for bank assets. Absent overbidding, however, optimal disclosure robustly improves risk sharing even when banks behave non-optimally.
Award ID(s):
1949112
PAR ID:
10523464
Author(s) / Creator(s):
; ; ;
Editor(s):
Kirchler, Michael; Weitzel, Utz
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
Journal of Banking & Finance
Volume:
154
Issue:
C
ISSN:
0378-4266
Page Range / eLocation ID:
106691
Subject(s) / Keyword(s):
Optimal disclosure; Stress tests; Bank regulation; Laboratory experiments
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Unstructured data, especially text, continues to grow rapidly in various domains. In particular, in the financial sphere, there is a wealth of accumulated unstructured financial data, such as the textual disclosure documents that companies submit on a regular basis to regulatory agencies, such as the Securities and Exchange Commission (SEC). These documents are typically very long and tend to contain valuable soft information about a company's performance. It is therefore of great interest to learn predictive models from these long textual documents, especially for forecasting numerical key performance indicators (KPIs). While there has been great progress in pre-trained language models (LMs) that learn from tremendously large corpora of textual data, they still struggle to represent long documents effectively. Our work addresses this need: developing better models that extract useful information from long textual documents and learn effective features that can leverage the soft financial and risk information for text regression (prediction) tasks. In this paper, we propose and implement a deep learning framework that splits long documents into chunks and utilizes pre-trained LMs to process and aggregate the chunks into vector representations, followed by self-attention to extract valuable document-level features. We evaluate our model on a collection of 10-K public disclosure reports from US banks, and another dataset of reports submitted by US companies. Overall, our framework outperforms strong baseline methods for textual modeling as well as a baseline regression model using only numerical data. Our work provides better insights into how utilizing pre-trained domain-specific and fine-tuned long-input LMs in representing long documents can improve the quality of representation of textual data, and therefore, help in improving predictive analyses.
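The chunk-and-aggregate pipeline described in the abstract above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: a random embedding table stands in for the pre-trained LM encoder, and every name, dimension, and the single attention query are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; a real system would use an LM's vocabulary and hidden size.
VOCAB, DIM, CHUNK_LEN = 50, 16, 8
EMBED = rng.standard_normal((VOCAB, DIM))  # stand-in for pre-trained token embeddings


def encode_chunk(chunk):
    """Stand-in for a pre-trained LM encoder: mean-pool token embeddings."""
    return EMBED[chunk].mean(axis=0)


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def document_vector(tokens):
    """Split a long token sequence into chunks, encode each, pool with attention."""
    chunks = [tokens[i:i + CHUNK_LEN] for i in range(0, len(tokens), CHUNK_LEN)]
    H = np.stack([encode_chunk(c) for c in chunks])  # (n_chunks, DIM)
    # attention pooling: a query vector scores each chunk, weights sum to 1
    query = rng.standard_normal(DIM)
    weights = softmax(H @ query)
    return weights @ H  # document-level feature vector, shape (DIM,)


doc = rng.integers(0, VOCAB, size=50)  # a toy "long document" of token ids
vec = document_vector(doc)
print(vec.shape)  # (16,)
```

The resulting document vector would then feed a regression head for KPI prediction; the abstract's framework additionally fine-tunes the LM encoder, which this sketch omits.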
  2. With the prevalence of mental health problems today, designing human-robot interaction for mental health intervention is not only possible, but critical. The current experiment examined how three types of robot disclosure (emotional, technical, and by-proxy) affect robot perception and human disclosure behavior during a stress-sharing activity. Emotional robot disclosure resulted in the lowest robot perceived safety. Post-hoc analysis revealed that increased perceived stress predicted reduced human disclosure, user satisfaction, robot likability, and future robot use. Negative attitudes toward robots also predicted reduced intention for future robot use. This work informs the design of robot disclosure, as well as how individual attributes, such as perceived stress, can impact human-robot interaction in a mental health context.
  3. In the financial sphere, there is a wealth of accumulated unstructured financial data, such as the textual disclosure documents that companies submit on a regular basis to regulatory agencies, such as the Securities and Exchange Commission (SEC). These documents are typically very long and tend to contain valuable soft information about a company’s performance that is not present in quantitative predictors. It is therefore of great interest to learn predictive models from these long textual documents, especially for forecasting numerical key performance indicators (KPIs). In recent years, there has been great progress in natural language processing via pre-trained language models (LMs) learned from large corpora of textual data. This prompts the important question of whether they can be used effectively to produce representations for long documents, as well as how we can evaluate the effectiveness of representations produced by various LMs. Our work focuses on answering this critical question, namely the evaluation of the efficacy of various LMs in extracting useful soft information from long textual documents for prediction tasks. In this paper, we propose and implement a deep learning evaluation framework that utilizes a sequential chunking approach combined with an attention mechanism. We perform an extensive set of experiments on a collection of 10-K reports submitted annually by US banks, and another dataset of reports submitted by US companies, in order to investigate thoroughly the performance of different types of language models. Overall, our framework using LMs outperforms strong baseline methods for textual modeling as well as for numerical regression. Our work provides better insights into how utilizing pre-trained domain-specific and fine-tuned long-input LMs for representing long documents can improve the quality of representation of textual data, and therefore, help in improving predictive analyses.
  4. This paper investigates the dynamics of systemic risk in banking networks by analyzing equilibrium points and stability conditions. The focus is on a model that incorporates interactions among distressed and undistressed banks. The equilibrium points are determined by solving a reduced system of equations, considering both homogeneous and heterogeneous scenarios. Local and global stability analyses reveal conditions under which equilibrium points are stable or unstable. Numerical simulations further illustrate the dynamics of systemic risk, while the theoretical findings offer insights into the behavior of distressed banks under varying conditions. Overall, the model enhances our understanding of systemic financial risk and offers valuable insights for risk management and policymaking in the banking sector. 
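The equilibrium-and-stability analysis described in the abstract above can be illustrated with a deliberately simplified one-dimensional contagion sketch. This is not the paper's model: the logistic-style dynamics, the parameter values, and the interpretation of D as the fraction of distressed banks are all assumptions made for illustration.

```python
import numpy as np

# Toy contagion dynamics: D = fraction of distressed banks,
# beta = contagion rate, gamma = recovery rate (illustrative values).
beta, gamma = 0.8, 0.3


def dD_dt(D):
    # distress spreads via contact with undistressed banks, decays via recovery
    return beta * D * (1.0 - D) - gamma * D


# Equilibria solve dD/dt = 0: D* = 0 and D* = 1 - gamma/beta.
# For beta > gamma the interior equilibrium is locally stable.
D_star = 1.0 - gamma / beta

# Forward-Euler simulation from a small initial shock
D, dt = 0.05, 0.01
for _ in range(5000):
    D += dt * dD_dt(D)

print(round(D, 3), round(D_star, 3))  # both ≈ 0.625
```

The simulated trajectory converging to the interior equilibrium mirrors, in miniature, the kind of local-stability conclusion the abstract describes; the paper's network model couples many such banks through interbank exposures.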
  5.
    We provide an overview of the relationship between financial networks and systemic risk. We present a taxonomy of different types of systemic risk, differentiating between direct externalities between financial organizations (e.g., defaults, correlated portfolios, fire sales), and perceptions and feedback effects (e.g., bank runs, credit freezes). We also discuss optimal regulation and bailouts, measurements of systemic risk and financial centrality, choices by banks regarding their portfolios and partnerships, and the changing nature of financial networks. 