<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Make the Pertinent Salient: Task-Relevant Reconstruction for Visual Control with Distractions</dc:title><dc:creator>Kim, Kyungmin; Lanier, JB; Fox, Roy</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Model-Based Reinforcement Learning (MBRL) has shown promise in visual control tasks
due to its data efficiency. However, training MBRL agents to develop generalizable perception
remains challenging, especially amid visual distractions that introduce noise in representation
learning. We introduce Segmentation Dreamer (SD), a framework that facilitates representation
learning in MBRL by incorporating a novel auxiliary task. Assuming that task-relevant
components in images can be easily identified with prior knowledge in a given task, SD uses
segmentation masks on image observations to reconstruct only task-relevant regions, reducing
representation complexity. SD can leverage either ground-truth masks available in simulation
or potentially imperfect segmentation foundation models. The latter is further improved
by selectively applying the image reconstruction loss to mitigate misleading learning signals
from mask prediction errors. In modified DeepMind Control Suite and Meta-World tasks with
added visual distractions, SD achieves significantly better sample efficiency and greater final
performance than prior work, and is especially effective in sparse-reward tasks that were
previously unsolvable by prior methods. We also validate its effectiveness in a real-world robotic lane-following
task when trained with intentional distractions for zero-shot transfer.</dc:description><dc:publisher>RLJ</dc:publisher><dc:date>2025-08-05</dc:date><dc:nsf_par_id>10615918</dc:nsf_par_id><dc:journal_name/><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation/><dc:issn/><dc:isbn/><dc:doi>https://doi.org/</dc:doi><dcq:identifierAwardId>2321786</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>