
Search for: All records

Creators/Authors contains: "Bostrom, Ann"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

  1. Abstract The benefits of collaboration between the research and operational communities during the research-to-operations (R2O) process have long been documented in the scientific literature. Operational forecasters have a practiced, expert insight into weather analysis and forecasting but typically lack the time and resources for formal research and development. Conversely, many researchers have the resources, theoretical knowledge, and formal experience to solve complex meteorological challenges but lack the understanding of operational procedures, needs, requirements, and authority necessary to effectively bridge the R2O gap. Collaboration then serves as the most viable strategy to advance understanding and improve prediction of atmospheric processes via ongoing multi-disciplinary knowledge transfer between the research and operational communities. However, existing R2O processes leave room for improvement when it comes to collaboration throughout a new product’s development cycle. This study assesses the subjective importance of collaboration at various stages of product development via a survey presented to participants of the 2021 Hazardous Weather Testbed Spring Forecasting Experiment. This feedback is then applied to create a proposed new R2O workflow that combines components from existing R2O procedures and modern co-production philosophies.
    Free, publicly-accessible full text available May 19, 2026
  2. Free, publicly-accessible full text available May 1, 2026
  3. Abstract As an increasing number of machine learning (ML) products enter the research-to-operations (R2O) pipeline, researchers have anecdotally noted a perceived hesitancy by operational forecasters to adopt this relatively new technology. One explanation often cited in the literature is that this perceived hesitancy derives from the complex and opaque nature of ML methods. Because modern ML models are trained to solve tasks by optimizing a potentially complex combination of mathematical weights, thresholds, and nonlinear cost functions, it can be difficult to determine how these models reach a solution from their given input. However, it remains unclear to what degree a model’s transparency may influence a forecaster’s decision to use that model, or whether that impact differs between ML and more traditional (i.e., non-ML) methods. To address this question, a survey was offered to forecaster and researcher participants attending the 2021 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiment (SFE) with questions about how participants subjectively perceive and compare machine learning products to more traditionally derived products. Results from this study revealed few differences in how participants evaluated machine learning products compared to other types of guidance. However, comparing the responses of operational forecasters, researchers, and academics exposed notable differences in which factors the three groups considered most important for determining the operational success of a new forecast product. These results support the need for increased collaboration between the operational and research communities.
Significance Statement: Participants of the 2021 Hazardous Weather Testbed Spring Forecasting Experiment were surveyed to assess how machine learning products are perceived and evaluated in operational settings.
The results revealed little difference in how machine learning products are evaluated compared to more traditional methods but emphasized the need for explainable product behavior and comprehensive end-user training. 
    Free, publicly-accessible full text available March 1, 2026
  4. Abstract Artificial intelligence and machine learning (AI/ML) have attracted a great deal of attention from the atmospheric science community. The explosion of attention on AI/ML development carries implications for the operational community, prompting questions about how novel AI/ML advancements will translate from research into operations. However, the field lacks empirical evidence on how National Weather Service (NWS) forecasters, as key intended users, perceive AI/ML and its use in operational forecasting. This study addresses this crucial gap through structured interviews conducted with 29 NWS forecasters from October 2021 through July 2023, in which we explored their perceptions of AI/ML in forecasting. We found that forecasters generally prefer the term “machine learning” over “artificial intelligence,” and that labeling a product as AI/ML did not hurt perceptions of the product and even made some forecasters more excited about it. Forecasters also had a wide range of familiarity with AI/ML, and overall they were (tentatively) open to the use of AI/ML in forecasting. We also provide examples of specific areas related to AI/ML that forecasters are excited or hopeful about and that they are concerned or worried about. One concern, raised in several ways, was that AI/ML could replace forecasters or remove them from the forecasting process. However, forecasters expressed a widespread and deep commitment to providing the best possible forecasts and services to uphold the agency mission, using whatever tools or products are available to assist them. Last, we note how forecasters’ perceptions evolved over the course of the study.
    Free, publicly-accessible full text available November 1, 2025
  5. Kenawy, Ahmed (Ed.)
    The Federal Emergency Management Agency (FEMA) has long advocated for what it calls a “Whole Community approach” to disaster resilience and recovery. This philosophy holds that the priorities of all governmental, commercial, and interest groups should be considered, and their capabilities leveraged, in preparing for and responding to disasters. According to FEMA, federally recognized Tribal governments are part of the “Whole Community.” In this paper we use systematic content analysis techniques to examine policy documents derived from the Hazard Mitigation Assistance grant program to assess whether and how FEMA has taken the concrete policy steps necessary to include Tribal governments in the “Whole Community.” We find that while FEMA has expressed interest in a more equitable and accessible program that serves the needs of Tribal governments, it has taken few practical steps toward this goal. 
    Free, publicly-accessible full text available August 20, 2025
  6. Abstract Artificial Intelligence applications are rapidly expanding across weather, climate, and natural hazards. AI can be used to assist with forecasting weather and climate risks, including forecasting both the chance that a hazard will occur and the negative impacts from it, which means AI can help protect lives, property, and livelihoods on a global scale in our changing climate. To ensure that we are achieving this goal, the AI must be developed to be trustworthy, which is a complex and multifaceted undertaking. We present our work from the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), where we are taking a convergence research approach. Our work deeply integrates across AI, environmental, and risk communication sciences. This involves collaboration with professional end-users to investigate how they assess the trustworthiness and usefulness of AI methods for forecasting natural hazards. In turn, we use this knowledge to develop AI that is more trustworthy. We discuss how and why end-users may trust or distrust AI methods for multiple natural hazards, including winter weather, tropical cyclones, severe storms, and coastal oceanography. 
  7. On January 15, 2022, the Hunga-Tonga-Hunga-Ha'apai (Tonga) volcano erupted and triggered a tsunami forecasted to reach North America. This event provided a unique opportunity to investigate risk perception and communication among coastal emergency managers and emergency program coordinators (EMs). In response, this research explores 1) how risk can be communicated most effectively and 2) how risk perceptions associated with “distant” tsunami alerts and warnings affect EMs' willingness to issue emergency alerts. A purposive sample of coastal EMs (n = 21) in the U.S. Pacific Northwest participated in semi-structured interviews. Participants represented Tribal, county, state, and federal agencies in Washington, Oregon, and California. Interview transcripts were deductively coded and thematically analyzed. Participants perceived low risk from the Tonga tsunami but took precautionary measures and alerted the public. Participants described how their actions were driven by community characteristics and the anticipated reactions to messaging among residents. Many reported the need to balance notifying the public and avoiding the negative impacts of their messaging (e.g., “crying wolf,” panic, curiosity). The unique nature of the event led to identification of unanticipated facilitators and barriers to decision-making among participants. These findings can inform distant tsunami risk communication and preparedness for coastal communities.
  8. By improving the prediction, understanding, and communication of powerful events in the atmosphere and ocean, artificial intelligence can revolutionize how communities respond to climate change. 
  9. We conducted mental model interviews in Aotearoa NZ to understand perspectives on uncertainty associated with natural hazards science. Such science contains many layers of interacting uncertainties, and varied understandings about what these are and where they come from create communication challenges, impacting the trust in, and use of, science. To improve effective communication, it is thus crucial to understand the many diverse perspectives on scientific uncertainty.
Participants included hazard scientists (e.g., geophysical, social, and other sciences), professionals with some scientific training (e.g., planners, policy analysts, emergency managers), and lay public participants with no advanced training in science (e.g., journalism, history, administration, art, or other domains). We present a comparative analysis of the mental model maps produced by participants, considering individuals’ levels of training and expertise in, and experience of, science.
A qualitative comparison identified increasing map organization with science literacy, suggesting that greater training in, experience with, or expertise in science results in a more organized and structured mental model of uncertainty. There were also language differences, with lay public participants focused more on perceptions of control and safety, while scientists focused on formal models of risk and likelihood.
These findings are presented to enhance hazard, risk, and science communication. It is also important to identify ways to understand the tacit knowledge individuals already hold, which may influence their interpretation of a message. The interview methodology we present here could also be adapted to understand different perspectives in participatory and co-development research.
  10. This project developed a pre-interview survey, interview protocols, and materials for conducting interviews with expert users to better understand how they assess new AI/ML guidance and make decisions about using it. Weather forecasters access and synthesize myriad sources of information when forecasting for high-impact, severe weather events. In recent years, artificial intelligence (AI) techniques have increasingly been used to produce new guidance tools with the goal of aiding weather forecasting, including for severe weather. For this study, we leveraged these advances to explore how National Weather Service (NWS) forecasters perceive the use of new AI guidance for forecasting severe hail and storm mode. We also specifically examine which guidance features are important for how forecasters assess the trustworthiness of new AI guidance. To this end, we conducted online, structured interviews with NWS forecasters from across the Eastern, Central, and Southern Regions. The interviews covered the forecasters’ approaches and challenges for forecasting severe weather, perceptions of AI and its use in forecasting, and reactions to one of two experimental (i.e., non-operational) AI severe weather guidance products: probability of severe hail or probability of storm mode. During the interview, the forecasters went through a self-guided review of different sets of information about the development (spin-up information, AI model technique, training of the AI model, input information) and performance (verification metrics, interactive output, output comparison to operational guidance) of the presented guidance. The forecasters then assessed how the information influenced their perception of how trustworthy the guidance was and whether or not they would consider using it for forecasting. This project includes the pre-interview survey, survey data, interview protocols, and accompanying information boards used for the interviews.
There is one set of interview materials in which AI/ML is mentioned throughout and another set where AI/ML was only mentioned at the end of the interviews. We did this to better understand how the label “AI/ML” did or did not affect how interviewees responded to interview questions and reviewed the information board. We also leveraged think-aloud methods with the information board, the instructions for which are included in the interview protocols.