<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Realizable Continuous-Space Shields for Safe Reinforcement Learning</dc:title><dc:creator>Kim, Kyungmin; Corsi, Davide; Rodriguez, Andoni; Lanier, JB; Parellada, Benjami; Baldi, Pierre; Sanchez, Cesar; Fox, Roy</dc:creator><dc:corporate_author/><dc:editor/><dc:description>While Deep Reinforcement Learning (DRL) has achieved remarkable success across various domains,
it remains vulnerable to occasional catastrophic failures without additional safeguards. An
effective solution to prevent these failures is to use a shield that validates and adjusts the agent’s
actions to ensure compliance with a provided set of safety specifications. For real-world robotic
domains, it is essential to define safety specifications over continuous state and action spaces to
accurately account for system dynamics and compute new actions that minimally deviate from the
agent’s original decision. In this paper, we present the first shielding approach specifically designed
to ensure the satisfaction of safety requirements in continuous state and action spaces, making it
suitable for practical robotic applications. Our method builds upon realizability, an essential
property guaranteeing that the shield can always generate a safe action for any state in
the environment. We formally prove that realizability can be verified for stateful shields,
enabling the incorporation of non-Markovian safety requirements, such as loop avoidance. Finally,
we demonstrate the effectiveness of our approach in ensuring safety without compromising the policy’s
success rate by applying it to a navigation problem and a multi-agent particle environment.
Keywords: Shielding, Reinforcement Learning, Safety, Robotics</dc:description><dc:publisher>7th Annual Conference on Learning for Dynamics and Control</dc:publisher><dc:date>2025-06-04</dc:date><dc:nsf_par_id>10614736</dc:nsf_par_id><dc:journal_name/><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation/><dc:issn/><dc:isbn/><dc:doi>https://doi.org/</dc:doi><dcq:identifierAwardId>2321786</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>