<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:dcq="http://purl.org/dc/terms/">
  <records count="1" morepages="false" start="1" end="1">
    <record rownumber="1">
      <dc:product_type>Journal Article</dc:product_type>
      <dc:title>Vulnerability Analysis for Safe Reinforcement Learning in Cyber-Physical Systems</dc:title>
      <dc:creator>Jiang, Shixiong (ORCID:0009000491372359); Liu, Mengyu (ORCID:0000000235329506); Kong, Fanxin (ORCID:0000000164883488)</dc:creator>
      <dc:corporate_author/>
      <dc:editor/>
      <dc:description>Safe reinforcement learning (safe RL) has been applied to synthesize control policies that maximize task rewards while adhering to safety constraints within simulated secure cyber-physical systems. However, the vulnerability of safe RL to adversarial attacks remains largely unexplored. We argue that understanding the safety vulnerabilities of learned control policies is crucial for ensuring true safety in real-world scenarios. To address this gap, we first formally define the safe RL problem using a formal language (signal temporal logic) and demonstrate that even optimal policies are susceptible to observation perturbations. We then introduce novel safety violation attacks that exploit adversarial models trained with reversed safety constraints to induce unsafe behaviors. Lastly, through both theoretical analysis and experimental results, we demonstrate that our approach is more effective at violating safety constraints than existing adversarial RL methods, which primarily focus on reducing task rewards rather than compromising safety.</dc:description>
      <dc:publisher>ACM</dc:publisher>
      <dc:date>2026-01-19</dc:date>
      <dc:nsf_par_id>10670644</dc:nsf_par_id>
      <dc:journal_name>ACM Transactions on Cyber-Physical Systems</dc:journal_name>
      <dc:journal_volume/>
      <dc:journal_issue/>
      <dc:page_range_or_elocation/>
      <dc:issn>2378-962X</dc:issn>
      <dc:isbn/>
      <dc:doi>https://doi.org/10.1145/3788281</dc:doi>
      <dcq:identifierAwardId>2442914; 2333980</dcq:identifierAwardId>
      <dc:subject/>
      <dc:version_number/>
      <dc:location/>
      <dc:rights/>
      <dc:institution/>
      <dc:sponsoring_org>National Science Foundation</dc:sponsoring_org>
    </record>
  </records>
</rdf:RDF>