<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Weakly-Supervised Object Representation Learning for Few-Shot Semantic Segmentation</dc:title><dc:creator>Ying, Xiaowen; Li, Xin; Chuah, Mooi Choo</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Training a semantic segmentation model requires large
densely-annotated image datasets that are costly to obtain.
Once the training is done, it is also difficult to add new object categories to such segmentation models.
In this paper, we tackle the few-shot semantic segmentation problem, which aims to perform the image segmentation task on unseen object categories based merely on one or a few support example(s).
The key to solving this few-shot segmentation problem lies in effectively utilizing object information from support examples to separate target objects from the background in a query image.
While existing methods typically generate object-level representations by averaging local features in support images, we demonstrate that such object representations are typically noisy and less discriminative.
To solve this problem, we design an object representation generator (ORG) module which can effectively aggregate local object features from support image(s) and produce a better object-level representation.
The ORG module can be embedded into the network and trained end-to-end in a weakly-supervised fashion without extra human annotation.
We incorporate this design into a modified encoder-decoder network to present a powerful and efficient framework for few-shot semantic segmentation.
Experimental results on the Pascal-VOC and MS-COCO datasets show that our approach achieves better performance compared to existing methods under both one-shot and five-shot settings.</dc:description><dc:publisher/><dc:date>2021-01-01</dc:date><dc:nsf_par_id>10286878</dc:nsf_par_id><dc:journal_name>IEEE Winter Conference on Applications of Computer Vision</dc:journal_name><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation>1497-1506</dc:page_range_or_elocation><dc:issn>2472-6796</dc:issn><dc:isbn/><dc:doi>https://doi.org/</dc:doi><dcq:identifierAwardId>1931867</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>