Abstract The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) focuses on creating trustworthy AI for a variety of environmental and Earth science phenomena. AI2ES includes leading experts from AI, atmospheric and ocean science, risk communication, and education, who work synergistically to develop and test trustworthy AI methods that transform our understanding and prediction of the environment. Trust is a social phenomenon, and our integration of risk communication research across AI2ES activities provides an empirical foundation for developing user‐informed, trustworthy AI. AI2ES also features activities for broadening participation and workforce development that are fully integrated with AI2ES research on trustworthy AI, environmental science, and risk communication.
This content will become publicly available on January 1, 2026
Artificial Intelligence Literacy for Ocean Professionals is Needed for a Sustainable Future
Although artificial intelligence (AI) is not a new phenomenon, interest in AI-based tools has exploded recently. This rapid technological development expands the use of machines beyond prediction and summarization to generating content that closely resembles human work. As a result, applications of AI are expanding rapidly and becoming integrated into various sectors of society (Stevens et al., 2021).
- Award ID(s): 2318309
- PAR ID: 10599689
- Publisher / Repository: The Oceanography Society
- Date Published:
- Journal Name: Oceanography
- ISSN: 1042-8275
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Artificial intelligence (AI) can be used to improve performance across a wide range of Earth system prediction tasks. As with any application of AI, it is important for AI to be developed in an ethical and responsible manner to minimize bias and other adverse effects. In this work, we extend our previous work demonstrating how AI can go wrong with weather and climate applications by presenting a categorization of bias for AI in the Earth sciences. This categorization can assist AI developers in identifying potential biases that can affect their model throughout the AI development life cycle. We highlight examples of each category of bias from a variety of Earth system prediction tasks.
-
Abstract Artificial intelligence applications are rapidly expanding across weather, climate, and natural hazards. AI can be used to assist with forecasting weather and climate risks, including both the chance that a hazard will occur and its negative impacts, which means AI can help protect lives, property, and livelihoods on a global scale in our changing climate. To ensure that we achieve this goal, the AI must be developed to be trustworthy, which is a complex and multifaceted undertaking. We present our work from the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), where we are taking a convergence research approach. Our work deeply integrates across AI, environmental, and risk communication sciences. This involves collaboration with professional end-users to investigate how they assess the trustworthiness and usefulness of AI methods for forecasting natural hazards. In turn, we use this knowledge to develop AI that is more trustworthy. We discuss how and why end-users may trust or distrust AI methods for multiple natural hazards, including winter weather, tropical cyclones, severe storms, and coastal oceanography.
-
Abstract As AI systems proliferate, their greenhouse gas emissions are an increasingly important concern for human societies. In this article, we present a comparative analysis of the carbon emissions associated with AI systems (ChatGPT, BLOOM, DALL-E2, Midjourney) and human individuals performing equivalent writing and illustrating tasks. Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated than human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts. These emissions analyses do not account for social impacts such as professional displacement, legality, and rebound effects. In addition, AI is not a substitute for all human tasks. Nevertheless, at present, the use of AI holds the potential to carry out several major activities at much lower emission levels than humans can. (A hedged sketch of how such a per-page comparison might be structured follows this list.)
-
Abstract The autoinducer‐2 (AI‐2) quorum sensing system is involved in a range of population‐based bacterial behaviors and has been engineered for cell–cell communication in synthetic biology systems. Investigation into the cellular mechanisms of AI‐2 processing has determined that overexpression of uptake genes increases the AI‐2 uptake rate, and genomic deletion of degradation genes lowers the AI‐2 level required for activation of reporter genes. Here, we combine these two strategies to engineer an Escherichia coli strain with an enhanced ability to detect and respond to AI‐2. In an E. coli strain that does not produce AI‐2, we monitored AI‐2 uptake and reporter protein expression in a strain that overproduced the AI‐2 uptake or phosphorylation units LsrACDB or LsrK, a strain with deletions of the AI‐2 degradation units LsrF and LsrG, and an “enhanced” strain with both overproduction of AI‐2 uptake elements and deletion of AI‐2 degradation elements. By adding up to 40 μM AI‐2 to growing cell cultures, we determine that this “enhanced” AI‐2‐sensitive strain takes up AI‐2 more rapidly and responds with greater reporter protein expression than the other strains. This work expands the toolbox for manipulating AI‐2 quorum sensing processes both in native environments and for synthetic biology applications.
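The per-page emissions comparison in the last related record above boils down to a ratio of per-task carbon footprints. The sketch below shows one plausible way to structure such a calculation; it is a minimal illustration under stated assumptions, not the method or data of the cited study, and every numeric value in it is a hypothetical placeholder.

```python
# Hypothetical sketch of a per-page CO2e comparison between an AI text
# generator and a human writer. Every number below is an illustrative
# placeholder, not a value from the study summarized above.

def ai_co2e_per_page(co2e_per_query_g: float, queries_per_page: float) -> float:
    """Grams CO2e attributed to generating one page of text with an AI system."""
    return co2e_per_query_g * queries_per_page

def human_co2e_per_page(annual_footprint_kg: float, hours_per_page: float,
                        working_hours_per_year: float = 2000.0) -> float:
    """Grams CO2e attributed to a human writing one page, prorated from an
    assumed annual carbon footprint over assumed annual working hours."""
    grams_per_hour = annual_footprint_kg * 1000.0 / working_hours_per_year
    return grams_per_hour * hours_per_page

if __name__ == "__main__":
    ai = ai_co2e_per_page(co2e_per_query_g=2.0, queries_per_page=1.0)            # placeholder values
    human = human_co2e_per_page(annual_footprint_kg=5000.0, hours_per_page=0.8)  # placeholder values
    print(f"AI:    {ai:.1f} g CO2e per page")
    print(f"Human: {human:.1f} g CO2e per page")
    print(f"Human-to-AI ratio: {human / ai:.0f}x")
```

In practice, any such ratio is highly sensitive to the assumptions chosen (marginal versus prorated per-capita emissions, time per page, queries per page), so figures like these should be read as order-of-magnitude comparisons rather than precise measurements.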