Pipeline networks are a crucial component of energy infrastructure, and natural force damage is an inevitable and unpredictable cause of pipeline failures. Such incidents can cause catastrophic losses, including harm to operators, communities, and the environment, so understanding their causes and impacts is critical to preventing future incidents. This study investigates artificial intelligence (AI) algorithms for predicting natural gas pipeline failures caused by natural forces, using climate data merged with pipeline incident records. The AI algorithms were applied to the publicly available Pipeline and Hazardous Materials Safety Administration (PHMSA) dataset from 2010 to 2022 to predict future failure patterns. After data pre-processing and feature selection, the proposed model achieved a prediction accuracy of 92.3% for natural gas pipeline damage caused by natural forces. Such models can help identify high-risk pipelines and prioritize inspection and maintenance activities, leading to cost savings and improved safety. Transportation agencies responsible for pipeline management can leverage these predictive capabilities to prevent pipeline failures, reduce environmental harm, and allocate resources effectively. This study highlights the potential of machine learning for predicting pipeline damage caused by natural forces and underscores the need for further research on the complex interactions between climate change and pipeline infrastructure monitoring and maintenance.
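As a rough illustration of the workflow this abstract describes (merge incident and climate records, pre-process, select features, train a classifier, evaluate accuracy), a minimal scikit-learn sketch follows. The file name, feature columns, and model choice are assumptions for illustration only, not the paper's actual implementation.

```python
# Hedged sketch: supervised classification of natural-force pipeline
# incidents, in the spirit of the abstract above. The CSV path, column
# names, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical merged PHMSA + climate dataset (2010-2022).
df = pd.read_csv("phmsa_incidents_with_climate.csv")
features = ["pipe_age_years", "diameter_in", "operating_pressure_psi",
            "mean_annual_temp_c", "annual_precip_mm", "soil_type_code"]
X = df[features]
y = df["natural_force_damage"]  # 1 = incident caused by natural forces

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = Pipeline([
    ("select", SelectKBest(f_classif, k=4)),  # simple feature selection
    ("clf", RandomForestClassifier(n_estimators=300, random_state=42)),
])
model.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```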
Three Paradoxes to Reconcile to Promote Safe, Fair, and Trustworthy AI in Education
Incorporating recordings of teacher-student conversations into the training of LLMs has the potential to improve AI tools. Although AI developers are encouraged to put "humans in the loop" of their AI safety protocols, educators do not typically drive the data collection or design and development processes underpinning new technologies. To gather insight into privacy concerns, the adequacy of safety procedures, and potential benefits of recording and aggregating data at scale to inform more intelligent tutors, we interviewed a pilot sample of teachers and administrators using a scenario-based, semi-structured interview protocol. Our preliminary findings reveal three "paradoxes" for the field to resolve to promote safe, fair, and trustworthy AI. We conclude with recommendations for education stakeholders to reconcile these paradoxes and advance the science of learning.
- Award ID(s): 2321499
- PAR ID: 10539200
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400706332
- Page Range / eLocation ID: 295–299
- Format(s): Medium: X
- Location: Atlanta, GA, USA
- Sponsoring Org: National Science Foundation
More Like this
While active efforts are advancing medical artificial intelligence (AI) model development and clinical translation, safety issues of these AI models are emerging but have received little research attention. We study the behavior of an AI diagnosis model under adversarial images generated by Generative Adversarial Network (GAN) models and evaluate how well human experts can visually identify potential adversarial images. Our GAN model makes intentional modifications to the diagnosis-sensitive contents of mammogram images used in deep learning-based computer-aided diagnosis (CAD) of breast cancer. In our experiments, the adversarial samples fool the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases that it initially classifies correctly. Five breast-imaging radiologists visually identify 29%–71% of the adversarial samples. Our study suggests an imperative need for continued research on the safety of medical AI models and for developing defenses against adversarial attacks.
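A minimal sketch of the fooling-rate metric this abstract reports (the share of initially correct CAD predictions that flip under GAN-perturbed inputs) might look as follows; `cad_model` and `gan_perturb` are assumed, illustrative callables, not the authors' code.

```python
# Hedged sketch of the fooling-rate evaluation described above.
# `cad_model` (a trained classifier with .predict) and `gan_perturb`
# (a GAN-based image perturbation function) are assumptions.
import numpy as np

def fooling_rate(cad_model, gan_perturb, images, labels):
    clean_pred = cad_model.predict(images)
    correct = clean_pred == labels          # cases the CAD model gets right
    adv_pred = cad_model.predict(gan_perturb(images))
    flipped = (adv_pred != labels) & correct
    # fraction of initially-correct cases fooled by adversarial images
    return flipped.sum() / max(correct.sum(), 1)
```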
Integrating artificial intelligence (AI) technologies into law enforcement has become a concern of contemporary politics and public discourse. In this paper, we qualitatively examine perspectives on AI technologies through 20 semi-structured interviews with law enforcement professionals in North Carolina. We investigate how integrating AI technologies, such as predictive policing and autonomous vehicle (AV) technology, affects the relationships between communities and police jurisdictions. The evidence suggests that police officers maintain that AI currently plays a limited role in policing but believe the technologies will continue to expand, improving public safety and policing capability. Conversely, officers believe that AI will not necessarily increase trust between police and the community, citing ethical concerns and the potential to infringe on civil rights. We therefore argue that the trend toward integrating AI technologies into law enforcement is not without risk. Policymaking guided by public consensus and collaborative discussion with law enforcement professionals should promote accountability through the responsible design of AI in policing, with the end state of providing societal benefits and mitigating harm to the populace. Society has a moral obligation to mitigate the detrimental consequences of fully integrating AI technologies into law enforcement.
Material characterization techniques are widely used to probe the physical and chemical properties of materials at the nanoscale and thus play a central role in materials discovery. However, the large and complex datasets these techniques generate often require significant human effort to interpret and to extract meaningful physicochemical insights. Artificial intelligence (AI) techniques such as machine learning (ML) can improve the efficiency and accuracy of surface analysis by automating data analysis and interpretation. In this perspective paper, we review the current role of AI in surface analysis and discuss its potential to accelerate discoveries in surface science, materials science, and interface science. We highlight several applications where AI has already been used to analyze surface analysis data, including identifying crystal structures from XRD data, analyzing XPS spectra for surface composition, and interpreting TEM and SEM images for particle morphology and size. We also discuss the challenges and opportunities of integrating AI into surface analysis workflows: the need for large and diverse datasets for training ML models, the importance of feature selection and representation, and the potential for ML to enable new insights and discoveries by identifying patterns and relationships in complex datasets. Most importantly, AI analysis must not merely find the best mathematical description of the data; it must find the most physically and chemically meaningful results. In addition, reproducibility in scientific research has become increasingly important in recent years, and the advancement of AI, including both conventional ML and the increasingly popular deep learning, shows promise here: by training models on large experimental datasets and automating analysis and interpretation, AI can help ensure that scientific results are reproducible and reliable. Although knowledge integration must be considered to keep AI models transparent and interpretable, incorporating AI into data collection and processing workflows will significantly enhance the efficiency and accuracy of surface analysis techniques and deepen our understanding at an accelerated pace.
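As one concrete, entirely hypothetical illustration of the kind of automated interpretation discussed above (not taken from the perspective paper), a classifier might map one-dimensional XRD patterns to crystal-structure labels:

```python
# Hedged, hypothetical illustration of ML-assisted surface analysis:
# classifying 1-D XRD patterns into crystal-structure labels. The data
# files and label set are assumptions, not from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# X: (n_samples, n_angles) diffraction intensities; y: structure labels
X = np.load("xrd_patterns.npy")      # hypothetical dataset
y = np.load("crystal_labels.npy")    # e.g. 0=cubic, 1=tetragonal, ...

# Normalize each pattern so absolute intensity scale does not dominate
X = X / X.max(axis=1, keepdims=True)

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```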
Ensuring that AI systems do what we, as humans, actually want them to do is one of the biggest open research challenges in AI alignment and safety. My research seeks to address this challenge directly by enabling AI systems to interact with humans to learn aligned and robust behaviors. The behavior of robots and other AI systems is often the result of optimizing a reward function. However, manually designing good reward functions is highly challenging and error-prone, even for domain experts: consider trying to write down a reward function that describes good driving behavior or how you like your bed made in the morning. While reward functions for such tasks are difficult to specify manually, human feedback in the form of demonstrations or preferences is often much easier to obtain. However, human data can be difficult to interpret due to ambiguity and noise, so it is critical that AI systems account for epistemic uncertainty over the human's true intent. My talk will give an overview of my lab's progress along three fundamental research directions: (1) efficiently maintaining uncertainty over human intent, (2) directly optimizing behavior to be robust to uncertainty over human intent, and (3) actively querying for additional human input to reduce uncertainty over human intent.
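A minimal sketch of one generic technique in this space, Bayesian reward inference from pairwise preferences under a Bradley-Terry model with linear reward features, appears below. This is an illustration of "maintaining uncertainty over human intent" under stated assumptions, not the speaker's specific algorithm; the data and feature dimension are synthetic.

```python
# Hedged sketch: maintain a posterior over linear reward weights w given
# pairwise preference data, using a Bradley-Terry likelihood and a crude
# Metropolis-Hastings sampler. Generic illustration; all data synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Each preference: feature vectors of preferred trajectory A and rejected B.
prefs = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(20)]

def log_posterior(w):
    # P(A > B | w) = sigmoid(w · (phi_A - phi_B)); standard normal prior.
    lp = -0.5 * w @ w
    for phi_a, phi_b in prefs:
        lp += -np.log1p(np.exp(-w @ (phi_a - phi_b)))
    return lp

# Metropolis-Hastings over reward weights.
w, samples = np.zeros(3), []
for _ in range(5000):
    cand = w + 0.3 * rng.normal(size=3)
    if np.log(rng.uniform()) < log_posterior(cand) - log_posterior(w):
        w = cand
    samples.append(w)

posterior = np.array(samples[1000:])             # discard burn-in
print("posterior mean:", posterior.mean(axis=0))
print("posterior std :", posterior.std(axis=0))  # epistemic uncertainty
```

The posterior spread over `w` is exactly the epistemic uncertainty over intent that robust optimization (direction 2) and active querying (direction 3) would then consume.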