<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Journal Article</dc:product_type><dc:title>Guided Policy Search for Stabilizing Contact-rich Motion Plans</dc:title><dc:creator>Dagher, Christopher; Silva, Chandika; Satici, Aykut C; Poonawala, Hasan A</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Learning policies for contact-rich manipulation is a challenging problem due to the presence of multiple contact modes with different dynamics, which complicates state and action exploration. Contact-rich motion planning uses simplified dynamics to reduce the dimension of the search space, but the resulting plans are difficult to execute under the true object-manipulator dynamics. This paper presents an algorithm for learning controllers based on guided policy search, where motion plans based on simplified dynamics define rewards and sampling distributions for policy gradient-based learning. We demonstrate that our guided policy search method improves the ability to learn manipulation controllers through a task involving pushing a box over a step.</dc:description><dc:publisher>Elsevier</dc:publisher><dc:date>2024-01-01</dc:date><dc:nsf_par_id>10595917</dc:nsf_par_id><dc:journal_name>IFAC-PapersOnLine</dc:journal_name><dc:journal_volume>58</dc:journal_volume><dc:journal_issue>28</dc:journal_issue><dc:page_range_or_elocation>1019 to 1024</dc:page_range_or_elocation><dc:issn>2405-8963</dc:issn><dc:isbn/><dc:doi>https://doi.org/10.1016/j.ifacol.2025.01.130</dc:doi><dcq:identifierAwardId>2330794</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>