Title: NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES)
Abstract: We introduce the National Science Foundation (NSF) AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES). This AI institute was funded in 2020 as part of a new NSF initiative to advance foundational AI research across a wide variety of domains. To date, AI2ES is the only NSF AI institute focusing on environmental science applications. Our institute focuses on developing trustworthy AI methods for weather, climate, and coastal hazards. The AI methods will revolutionize our understanding and prediction of high-impact atmospheric and ocean science phenomena and will be used by diverse professional user groups to reduce risks to society. In addition, we are creating novel educational paths, including a new degree program at a community college serving underrepresented minorities, to improve workforce diversity for both AI and environmental science.
Award ID(s):
2019758
NSF-PAR ID:
10422684
Author(s) / Creator(s):
; ; ; ; ; ; ; ; ; ; ; ;
Date Published:
Journal Name:
Bulletin of the American Meteorological Society
Volume:
103
Issue:
7
ISSN:
0003-0007
Page Range / eLocation ID:
E1658 to E1668
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) focuses on creating trustworthy AI for a variety of environmental and Earth science phenomena. AI2ES includes leading experts from AI, atmospheric and ocean science, risk communication, and education, who work synergistically to develop and test trustworthy AI methods that transform our understanding and prediction of the environment. Trust is a social phenomenon, and our integration of risk communication research across AI2ES activities provides an empirical foundation for developing user-informed, trustworthy AI. AI2ES also features broadening-participation and workforce-development activities that are fully integrated with AI2ES research on trustworthy AI, environmental science, and risk communication.

     
  2. Abstract

    Artificial intelligence (AI) applications are rapidly expanding across weather, climate, and natural hazards. AI can assist with forecasting weather and climate risks, including both the chance that a hazard will occur and its negative impacts, and can therefore help protect lives, property, and livelihoods on a global scale in our changing climate. To ensure that we achieve this goal, the AI must be developed to be trustworthy, which is a complex and multifaceted undertaking. We present our work from the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), where we are taking a convergence research approach. Our work deeply integrates AI, environmental, and risk communication sciences. This involves collaboration with professional end-users to investigate how they assess the trustworthiness and usefulness of AI methods for forecasting natural hazards. In turn, we use this knowledge to develop AI that is more trustworthy. We discuss how and why end-users may trust or distrust AI methods for multiple natural hazards, including winter weather, tropical cyclones, severe storms, and coastal oceanography.

     
  3. Abstract

    Many of our generation’s most pressing environmental science problems are wicked problems, which means they cannot be cleanly isolated and solved with a single ‘correct’ answer (e.g., Rittel 1973; Wirz 2021). The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) seeks to address such problems by developing synergistic approaches with a team of scientists from three disciplines: environmental science (including atmospheric, ocean, and other physical sciences), AI, and social science including risk communication. As part of our work, we developed a novel approach to summer school, held from June 27 to 30, 2022. The goal of this summer school was to teach a new generation of environmental scientists how to cross disciplines and develop approaches that integrate all three disciplinary perspectives in order to solve environmental science problems. In addition to a lecture series focused on the synthesis of AI, environmental science, and risk communication, this year’s summer school included a unique Trust-a-thon component in which participants gained hands-on experience applying both risk communication and explainable AI techniques to pre-trained ML models (a minimal sketch of one such technique appears after this abstract). We had 677 participants from 63 countries register and attend online. Lecture topics included trust and trustworthiness (Day 1), explainability and interpretability (Day 2), data and workflows (Day 3), and uncertainty quantification (Day 4). For the Trust-a-thon we developed challenge problems for three application domains: (1) severe storms, (2) tropical cyclones, and (3) space weather. Each domain had an associated user persona to guide user-centered development.
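    As a point of reference for the explainability portion of the Trust-a-thon described above, the following is a minimal sketch of one widely used explainable-AI technique, permutation importance, applied to a stand-in classifier. The predictor names, synthetic data, and random forest below are illustrative assumptions only; they are not the actual Trust-a-thon challenge models or datasets.

```python
# Minimal, illustrative sketch only: the predictor names, synthetic data, and
# stand-in "pre-trained" random forest below are assumptions, not the actual
# AI2ES Trust-a-thon challenge models or datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical storm-environment predictors (names are placeholders).
feature_names = ["cape", "shear_0_6km", "lapse_rate_700_500", "dewpoint_depression"]
X = rng.normal(size=(2000, len(feature_names)))
# Synthetic "severe hail" label driven mostly by the first two predictors.
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for a pre-trained model handed to participants.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each predictor degrade skill?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranking = sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda item: item[1],
    reverse=True,
)
for name, mean, std in ranking:
    print(f"{name:>22s}: {mean:.3f} +/- {std:.3f}")
```

    Rankings of this kind, which order predictors by how much model skill degrades when each one is shuffled, are one example of the explainability output a Trust-a-thon user persona might be asked to interpret.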
  4. Abstract

    As artificial intelligence (AI) methods are increasingly used to develop new guidance intended for operational use by forecasters, it is critical to evaluate whether forecasters deem the guidance trustworthy. Past trust-related AI research suggests that certain attributes (e.g., understanding how the AI was trained, interactivity, and performance) contribute to users perceiving the AI as trustworthy. However, little research has examined the role of these and other attributes for weather forecasters. In this study, we conducted 16 online interviews with National Weather Service (NWS) forecasters to examine (i) how they make guidance use decisions and (ii) how the AI model technique used, training, input variables, performance, and developers, as well as interacting with the model output, influenced their assessments of the trustworthiness of new guidance. The interviews pertained to either a random forest model predicting the probability of severe hail or a 2D convolutional neural network model predicting the probability of storm mode. Taken as a whole, our findings illustrate how forecasters’ assessment of AI guidance trustworthiness is a process that occurs over time rather than automatically or at first introduction. We recommend that developers center end users when creating new AI guidance tools, making end users integral to their thinking and efforts. This approach is essential for the development of useful and used tools. The details of these findings can help AI developers understand how forecasters perceive AI guidance and can inform AI development and refinement efforts.

    Significance Statement

    We used a mixed-methods, quantitative and qualitative approach to understand how National Weather Service (NWS) forecasters 1) make guidance use decisions within their operational forecasting process and 2) assess the trustworthiness of prototype guidance developed using artificial intelligence (AI). Taken as a whole, our findings illustrate that forecasters’ assessment of AI guidance trustworthiness is a process that occurs over time rather than automatically, and they suggest that developers must center the end user when creating new AI guidance tools to ensure that the developed tools are useful and used.

     
  5. Abstract

    The National Science Foundation (NSF) Artificial Intelligence (AI) Institute for Edge Computing Leveraging Next Generation Networks (Athena) seeks to foment a transformation in modern edge computing by advancing AI foundations, computing paradigms, networked computing systems, and edge services and applications from a completely new computing perspective. Led by Duke University, Athena leverages revolutionary developments in computer systems, machine learning, networked computing systems, cyber-physical systems, and sensing. Members of Athena form a multidisciplinary team from eight universities. Athena organizes its research activities under four interrelated thrusts supporting edge computing: Foundational AI, Computer Systems, Networked Computing Systems, and Services and Applications, which together constitute an ambitious and comprehensive research agenda. Athena's research tasks focus on developing AI-driven, next-generation technologies for edge computing, establishing new algorithmic and practical foundations of AI, and evaluating the research outcomes through a combination of analytical, experimental, and empirical instruments, especially targeted, use-inspired research. The researchers of Athena demonstrate a cohesive effort by synergistically integrating the research outcomes from the four thrusts into three pillars: Edge Computing AI Systems, Collaborative Extended Reality (XR), and Situational Awareness and Autonomy. Athena is committed to a robust and comprehensive suite of educational and workforce development endeavors, alongside domestic and international collaboration and knowledge transfer efforts with external stakeholders that include both industry and community partnerships.

     