The Battery Management System (BMS) plays a crucial role in modern energy storage technologies, ensuring battery safety, performance, and longevity. However, as the BMS becomes more sophisticated and interconnected, it faces growing cybersecurity challenges that can lead to catastrophic failures and safety hazards. This paper provides a comprehensive overview of cyberattacks targeting both traditional and wireless BMS. It explores various attack vectors, including malware injection, electromagnetic interference (EMI), temperature-sensing manipulation, sensor malfunction and fault injection, and jamming attacks on modern BMS. Through threat modeling and vulnerability analysis, the paper examines the potential impacts on BMS functionality, safety, and performance. We highlight vulnerabilities associated with different BMS architectures and components, emphasizing the need for robust cybersecurity measures against emerging threats: unauthorized access to or tampering with the BMS can trigger false alarms, cause malfunctions, disrupt fault-response mechanisms, and lead to dangerous failures that jeopardize system performance and associated resources. Key defensive strategies include intrusion detection systems (IDS), crypto-based authentication, secure firmware updates, and hardware-based security mechanisms such as trusted platform modules (TPMs). These measures strengthen BMS resilience by preventing unauthorized access and ensuring data integrity. Our findings are essential for mitigating risks in sectors such as electric vehicles (EVs), renewable energy, and grid storage, and they underscore the importance of ongoing research into adaptive security strategies that can keep pace with evolving cyber threats. Additionally, we propose a trust mechanism that secures the connection between input sensors and the BMS, ensuring the reliability and safety of battery-powered systems across industries.
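The abstract's proposed sensor-to-BMS trust mechanism is not specified here, but a common building block for such schemes is per-sensor message authentication. The sketch below is a minimal illustration, assuming a pre-shared key per sensor, HMAC-SHA256 over each reading, and a monotonic counter for replay protection; the frame layout, field names, and functions are hypothetical and are not the paper's actual design.

```python
import hmac
import hashlib
import struct

# Hypothetical sketch of a sensor-to-BMS trust mechanism: each sensor shares a
# secret key with the BMS and authenticates every reading with HMAC-SHA256.
# A monotonically increasing counter is included so replayed frames are rejected.
# This illustrates crypto-based authentication in general, not the specific
# mechanism proposed in the paper.

def sign_reading(key: bytes, sensor_id: int, counter: int, value_mv: int) -> bytes:
    """Pack a sensor frame and append an HMAC-SHA256 tag."""
    payload = struct.pack(">HIi", sensor_id, counter, value_mv)
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_reading(key: bytes, frame: bytes, last_counter: int):
    """Return (sensor_id, counter, value_mv) if the tag and counter check out, else None."""
    payload, tag = frame[:-32], frame[-32:]
    if not hmac.compare_digest(hmac.new(key, payload, hashlib.sha256).digest(), tag):
        return None                      # tampered or forged frame
    sensor_id, counter, value_mv = struct.unpack(">HIi", payload)
    if counter <= last_counter:
        return None                      # replayed or out-of-order frame
    return sensor_id, counter, value_mv

# Example: a cell-voltage sensor reports 3712 mV as frame number 42.
key = b"per-sensor pre-shared key (demo)"
frame = sign_reading(key, sensor_id=7, counter=42, value_mv=3712)
assert verify_reading(key, frame, last_counter=41) == (7, 42, 3712)
assert verify_reading(key, frame, last_counter=42) is None   # replay rejected
```

In a real deployment the key material would live in the hardware-backed storage the abstract mentions (e.g., a TPM or secure element) rather than in application code.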
AI-Driven Cybersecurity: Opportunities, Challenges, and the Future of Human-AI Collaboration
As cyber threats grow in both frequency and sophistication, traditional cybersecurity measures struggle to keep pace with evolving attack methods. Artificial Intelligence (AI) has emerged as a powerful tool for enhancing threat detection, prevention, and response. AI-driven security systems offer the ability to analyze vast amounts of data in real time, recognize subtle patterns indicative of cyber threats, and adapt to new attack strategies more efficiently than conventional approaches. However, despite AI's potential, challenges remain regarding its effectiveness, ethical implications, and risks of adversarial manipulation. This research investigates the strengths and limitations of AI-driven cybersecurity by comparing AI-based security tools with traditional methods, identifying key advantages and vulnerabilities, and exploring ethical considerations. Additionally, a survey of cybersecurity professionals was conducted to assess expert opinions on AI's role, effectiveness, and potential risks. By combining these insights with experimental testing and a comprehensive review of existing literature, this study provides a nuanced understanding of AI's impact on cybersecurity and offers recommendations for optimizing its integration into modern security infrastructures.
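The abstract describes AI-driven systems that learn patterns from large volumes of data and flag deviations. As a concrete illustration of one widely used approach (not the tooling evaluated in this study), the sketch below trains an unsupervised anomaly detector on synthetic "normal" network-flow features and scores a suspicious flow; the feature set, values, and thresholds are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Minimal sketch of one common AI-driven detection approach: an unsupervised
# anomaly detector fitted to baseline network-flow features, then used to
# flag unusual traffic. The features and numbers are illustrative assumptions.

rng = np.random.default_rng(0)

# Synthetic baseline traffic: [bytes_sent, packets, distinct_ports, duration_s]
normal = np.column_stack([
    rng.normal(40_000, 8_000, 2_000),   # bytes_sent
    rng.normal(300, 60, 2_000),         # packets
    rng.integers(1, 5, 2_000),          # distinct_ports
    rng.normal(30, 10, 2_000),          # duration_s
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A short flow that touches many ports but moves little data resembles a scan.
suspect = np.array([[900, 1_200, 180, 5]])
print(detector.predict(suspect))             # -1 marks an anomalous flow
print(detector.decision_function(suspect))   # lower score means more anomalous
```

Production systems would add labeled detection models, drift monitoring, and analyst feedback loops, which is where the human-AI collaboration discussed in the paper comes in.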
- Award ID(s): 1754054
- PAR ID: 10623602
- Publisher / Repository: The 2025 ADMI Symposium
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Cybersecurity is a major concern for organizations in this era. However, strengthening the security of an organization's internal network may not be sufficient, since modern organizations depend on third parties, and these dependencies can open new attack paths to cybercriminals. Cyber Third-Party Risk Management (C-TPRM) is a relatively new concept in the business world. Every vendor or partner represents a potential security vulnerability and threat: even if an organization has the best cybersecurity practices, its data, customers, and reputation may be put at risk by a third party. Organizations therefore seek effective and efficient methods to assess their partners' cybersecurity risks. In addition to intrusive methods for assessing an organization's cybersecurity risks, such as penetration testing, non-intrusive methods are emerging that conduct C-TPRM more easily by synthesizing publicly available information, without requiring any involvement of the subject organization. In this study, the existing C-TPRM methods built by different companies are presented and compared to identify the commonly used indicators and criteria for the assessments. Additionally, the results of different methods assessing the cybersecurity risks of a specific organization were compared to examine their reliability and consistency. The results showed that although the methods produce similar results, the security scores they provide do not entirely converge.
- Background: The rapid advancement of artificial intelligence (AI) is reshaping industrial workflows and workforce expectations. After its breakthrough year in 2023, AI has become ubiquitous, yet no standardized approach exists for integrating AI into engineering and computer science undergraduate curricula. Recent graduates find themselves navigating evolving industry demands surrounding AI, often without formal preparation. The ways in which AI impacts their career decisions represent a critical perspective for supporting future students as graduates enter AI-friendly industries. Our work uses social cognitive career theory (SCCT) to qualitatively investigate how 14 recent engineering graduates working in a variety of industry sectors perceived the impact of AI on their careers and industries. Results: Given the rapid and ongoing evolution of AI, findings suggested that SCCT may have limited applicability until AI technology has matured further. Many recent graduates lacked prior exposure to or a clear understanding of AI and its relevance to their professional roles. The timing of direct, practical exposure to AI emerged as a key influence on how participants perceived AI's impact on their career decisions. Participants emphasized a need for more customizable undergraduate curricula to align with industry trends and individual interests related to AI. While many acknowledged AI's potential to enhance efficiency in data management and routine administrative tasks, they largely did not perceive AI as a direct threat to their core engineering functions. Instead, AI was viewed as a supplemental tool requiring critical oversight. Despite interest in AI's potential, most participants lacked the time or resources to independently pursue integrating AI into their professional roles. Broader concerns included ethical considerations, industry regulations, and the rapid pace of AI development. Conclusions: This exploratory work highlights an urgent need for collaboration between higher education and industry leaders to more effectively integrate direct, hands-on experience with AI into engineering education. A personalized, context-driven approach to teaching AI that emphasizes ethical considerations and domain-specific applications would better prepare students for evolving workforce expectations by highlighting AI's relevance and limitations. This alignment would support more meaningful engagement with AI and empower future engineers to apply it responsibly and effectively in their fields.
- Connected vehicle (CV) technology brings both opportunities and challenges to the traffic signal control (TSC) system. While safety and mobility performance can be greatly improved by adopting CV technologies, the connectivity between vehicles and transportation infrastructure may increase the risk of cyber threats. Studies of cybersecurity for TSC systems have been conducted in the past few years, but a systematic investigation that provides a comprehensive analysis framework is still lacking. In this study, our aim is to fill that research gap by proposing a comprehensive analysis framework for the cybersecurity problem of TSC in the CV environment. With potential threats to the major components of the system and their corresponding impacts on safety and efficiency analyzed, data spoofing is considered the most plausible and realistic attack approach. Based on this finding, different attack strategies and defense solutions are discussed. A case study is presented to show the impact of data spoofing attacks on a selected CV-based TSC system and the corresponding mitigation countermeasures. The case study is conducted on a hybrid security testing platform with virtual traffic and a real V2X communication network. To the best of our knowledge, this is the first study to present a comprehensive analysis framework for the cybersecurity problem of CV-based TSC systems. A minimal illustrative sketch of the kind of plausibility check such a defense might use appears after this list.
- Cybersecurity and Artificial Intelligence (AI) are key domains whose intersection offers great promise and poses significant threats. Indeed, the National Academy of Sciences (NAS), the National Science Foundation (NSF), and other respected entities have noted the significant role that AI can play in cybersecurity, and the importance of ensuring the security of AI-enabled algorithms and systems. This minitrack focuses on AI and cybersecurity in broader domains, collaborative inter-organizational realms, shared collaborative domains, or with collaborative technologies. The papers in this minitrack have the potential to offer interesting and impactful solutions in emerging areas, including unmanned aerial vehicles and open-source software security.
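The connected-vehicle TSC entry above identifies data spoofing as the most plausible attack on CV-based signal control. One common defensive ingredient is a kinematic plausibility filter on incoming vehicle reports; the sketch below is a minimal, hypothetical illustration of that idea (message fields, thresholds, and function names are assumptions, not the case study's actual countermeasure).

```python
from dataclasses import dataclass

# Hypothetical sketch of a kinematic plausibility filter a CV-based signal
# controller could apply to vehicle reports before using them for timing.
# A spoofed "ghost" vehicle that teleports or brakes/accelerates impossibly
# hard between messages is rejected. All values are illustrative assumptions.

MAX_ACCEL_MPS2 = 8.0      # beyond typical vehicle capability
MAX_SPEED_MPS = 60.0      # ~216 km/h, implausible on an arterial approach

@dataclass
class Report:
    vehicle_id: str
    t: float        # timestamp, seconds
    pos: float      # distance along approach, meters
    speed: float    # m/s

def plausible(prev: Report, curr: Report) -> bool:
    """Accept curr only if it is kinematically consistent with prev."""
    dt = curr.t - prev.t
    if dt <= 0 or curr.speed < 0 or curr.speed > MAX_SPEED_MPS:
        return False
    accel = (curr.speed - prev.speed) / dt
    if abs(accel) > MAX_ACCEL_MPS2:
        return False
    # Position must roughly agree with the average reported speed.
    predicted = prev.pos + 0.5 * (prev.speed + curr.speed) * dt
    return abs(curr.pos - predicted) < 5.0   # meters of slack for noise

a = Report("v1", t=0.0, pos=100.0, speed=12.0)
b = Report("v1", t=1.0, pos=112.0, speed=12.5)     # consistent update
ghost = Report("v1", t=1.0, pos=30.0, speed=0.0)   # spoofed stopped vehicle at the stop bar
print(plausible(a, b))      # True
print(plausible(a, ghost))  # False: implied deceleration and position jump are implausible
```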
