ABSTRACT Artificial Intelligence (AI) methods are valued for their ability to predict outcomes from dynamically complex data. Despite this virtue, AI is widely criticized as a “black box,” i.e., lacking mechanistic explanations to accompany predictions. We introduce a novel interdisciplinary approach that balances the predictive power of data-driven methods with theory-driven explanatory power by presenting a shared use case from four disciplinary perspectives. The use case examines scientific career trajectories through temporally complex, heterogeneous bibliographic big data. Topics addressed include: data representation in complex problems; trade-offs between theoretical, hypothesis-driven, and data-driven approaches; AI trustworthiness; model fairness; algorithm explainability; and AI adoption/usability. Panelists and audience members will be prompted to discuss the value of the approach presented versus other ways to address the challenges raised by the panel, and to consider their limitations and remaining challenges.
ADOPTION OF ARTIFICIAL INTELLIGENCE BY ELECTRIC UTILITIES
Adopting Artificial Intelligence (AI) in electric utilities signifies vast, yet largely untapped, potential for accelerating a clean energy transition. This requires tackling complex challenges such as trustworthiness, explainability, privacy, cybersecurity, and governance, balancing these against AI’s benefits. This article aims to facilitate dialogue among regulators, policymakers, utilities, and other stakeholders on navigating these complex issues, fostering a shared understanding and approach to leveraging AI’s transformative power responsibly. The complex interplay of state and federal regulations necessitates careful coordination, particularly as AI impacts energy markets and national security. Promoting data sharing with privacy and cybersecurity in mind is critical. The article advocates for ‘realistic open benchmarks’ to foster innovation without compromising confidentiality. Trustworthiness (the system’s ability to ensure reliability and performance, and to inspire confidence and transparency) and explainability (ensuring that AI decisions are understandable and accessible to a large diversity of participants) are fundamental for AI acceptance, necessitating transparent, accountable, and reliable systems. AI must be deployed in a way that helps keep the lights on. As AI becomes more involved in decision-making, questions of responsibility and ethics must be addressed. Given the current state of the art, using generative AI for critical, near-real-time decision-making should be approached carefully. While AI is advancing rapidly in both technology and regulation, within and beyond energy-specific applications, this article aims to provide timely insights and a common understanding of AI, its opportunities and challenges for electric utility use cases, and ultimately to help advance its adoption in the power system sector, accelerating the equitable clean energy transition.
- Award ID(s): 2133284
- PAR ID: 10544704
- Editor(s): Reiter, Harvey L
- Publisher / Repository: Energy Bar Association
- Date Published:
- Journal Name: Energy law journal
- Volume: 45
- Issue: 1
- ISSN: 0270-9163
- Subject(s) / Keyword(s): Energy, AI, Privacy
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Despite AI’s significant growth, its “black box” nature creates challenges in generating adequate trust. Thus, it is seldom utilized as a standalone unit in high-risk applications. Explainable AI (XAI) has emerged to help with this problem. Designing XAI that is both fast and accurate is still challenging, especially in numerical applications. We propose a novel XAI model named Transparency Relying Upon Statistical Theory (TRUST). TRUST models the statistical behavior of the underlying AI’s outputs. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information to rank these latent variables and pick only the ones most influential on the AI’s outputs, calling them “representatives” of the classes. Then we use multi-modal Gaussian distributions to determine the likelihood of any new sample belonging to each class. The proposed technique is a surrogate model that does not depend on the type of the underlying AI, making TRUST suitable for any numerical application. Here, we use cybersecurity of the industrial internet of things (IIoT) as an example application. We analyze the performance of the model using three cybersecurity datasets: “WUSTL-IIoT”, “NSL-KDD”, and “UNSW”. We also show how TRUST is explained to the user. TRUST XAI provides explanations for new random samples with an average success rate of 98%. We also evaluate the advantages of our model over another popular XAI model, LIME, in terms of performance, speed, and method of explainability. (A hedged code sketch of this surrogate pipeline appears after this list.)
- As cyber threats grow in both frequency and sophistication, traditional cybersecurity measures struggle to keep pace with evolving attack methods. Artificial Intelligence (AI) has emerged as a powerful tool for enhancing threat detection, prevention, and response. AI-driven security systems offer the ability to analyze vast amounts of data in real time, recognize subtle patterns indicative of cyber threats, and adapt to new attack strategies more efficiently than conventional approaches. However, despite AI’s potential, challenges remain regarding its effectiveness, ethical implications, and risks of adversarial manipulation. This research investigates the strengths and limitations of AI-driven cybersecurity by comparing AI-based security tools with traditional methods, identifying key advantages and vulnerabilities, and exploring ethical considerations. Additionally, a survey of cybersecurity professionals was conducted to assess expert opinions on AI’s role, effectiveness, and potential risks. By combining these insights with experimental testing and a comprehensive review of existing literature, this study provides a nuanced understanding of AI’s impact on cybersecurity and offers recommendations for optimizing its integration into modern security infrastructures.
- Cybersecurity has rapidly emerged as a grand societal challenge of the 21st century. Innovative solutions to proactively tackle emerging cybersecurity challenges are essential to ensuring a safe and secure society. Artificial Intelligence (AI) has rapidly emerged as a viable approach for sifting through terabytes of heterogeneous cybersecurity data to execute fundamental cybersecurity tasks, such as asset prioritization, control allocation, vulnerability management, and threat detection, with unprecedented efficiency and effectiveness. Despite its initial promise, AI and cybersecurity have traditionally been siloed disciplines that relied on disparate knowledge and methodologies. Consequently, the AI for Cybersecurity discipline is in its nascency. In this article, we aim to provide an important step to progress the AI for Cybersecurity discipline. We first provide an overview of prevailing cybersecurity data, summarize extant AI for Cybersecurity application areas, and identify key limitations in the prevailing landscape. Based on these key issues, we offer a multi-disciplinary AI for Cybersecurity roadmap that centers on major themes such as cybersecurity applications and data, advanced AI methodologies for cybersecurity, and AI-enabled decision making. To help scholars and practitioners make significant headway in tackling these grand AI for Cybersecurity issues, we summarize promising funding mechanisms from the National Science Foundation (NSF) that can support long-term, systematic research programs. We conclude this article with an introduction to the articles included in this special issue.
- As AI-assisted decision making becomes increasingly prevalent, individuals often fail to utilize AI-based decision aids appropriately, especially when AI explanations are absent, potentially because they do not reflect critically on the AI’s decision recommendations. Large language models (LLMs), with their exceptional conversational and analytical capabilities, present great opportunities to enhance AI-assisted decision making in the absence of AI explanations by providing natural-language-based analysis of the AI’s decision recommendation, e.g., how each feature of a decision-making task might contribute to the AI recommendation. In this paper, via a randomized experiment, we first show that presenting LLM-powered analysis of each task feature, either sequentially or concurrently, does not significantly improve people’s AI-assisted decision performance. To enable decision makers to better leverage LLM-powered analysis, we then propose an algorithmic framework to characterize the effects of LLM-powered analysis on human decisions and dynamically decide which analysis to present. Our evaluation with human subjects shows that this approach effectively improves decision makers’ appropriate reliance on AI in AI-assisted decision making.