<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:dcq="http://purl.org/dc/terms/">
  <records count="1" morepages="false" start="1" end="1">
    <record rownumber="1">
      <dc:product_type>Conference Paper</dc:product_type>
      <dc:title>Near Optimal Behavior via Approximate State Abstraction</dc:title>
      <dc:creator>Abel, David; Hershkowitz, D. Ellis; Littman, Michael L.</dc:creator>
      <dc:corporate_author/>
      <dc:editor/>
      <dc:description>The combinatorial explosion that plagues planning and reinforcement learning (RL) algorithms can be moderated using state abstraction. Prohibitively large task representations can be condensed such that essential information is preserved, and consequently, solutions are tractably computable. However, exact abstractions, which treat only fully-identical situations as equivalent, fail to present opportunities for abstraction in environments where no two situations are exactly alike. In this work, we investigate approximate state abstractions, which treat nearly-identical situations as equivalent. We present theoretical guarantees of the quality of behaviors derived from four types of approximate abstractions. Additionally, we empirically demonstrate that approximate abstractions lead to reduction in task complexity and bounded loss of optimality of behavior in a variety of environments.</dc:description>
      <dc:publisher/>
      <dc:date>2016-01-01</dc:date>
      <dc:nsf_par_id>10026422</dc:nsf_par_id>
      <dc:journal_name>ICML</dc:journal_name>
      <dc:journal_volume/>
      <dc:journal_issue/>
      <dc:page_range_or_elocation/>
      <dc:issn/>
      <dc:isbn/>
      <dc:doi>https://doi.org/</dc:doi>
      <dcq:identifierAwardId>1637614</dcq:identifierAwardId>
      <dc:subject/>
      <dc:version_number/>
      <dc:location/>
      <dc:rights/>
      <dc:institution/>
      <dc:sponsoring_org>National Science Foundation</dc:sponsoring_org>
    </record>
  </records>
</rdf:RDF>