<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Iterative Bayesian Learning for Crowdsourced Regression</dc:title><dc:creator>Ok, Jungseul; Oh, Sewoong; Jang, Yunhun; Shin, Jinwoo; Yi, Yung</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Crowdsourcing platforms have emerged as popular venues for purchasing human intelligence at low cost for large volumes of tasks. Because many low-paid workers are prone to giving noisy answers, a common practice is to add redundancy by assigning multiple workers to each task and then simply averaging their answers. However, to fully harness the wisdom of the crowd, one needs to learn the heterogeneous quality of each worker. We resolve this fundamental challenge for crowdsourced regression tasks, i.e., tasks whose answers take continuous labels, where identifying good or bad workers is far less trivial than in a classification setting with discrete labels. In particular, we introduce a Bayesian iterative scheme and show that it provably achieves the optimal mean squared error. Our evaluations on synthetic and real-world datasets support our theoretical results and show the superiority of the proposed scheme.</dc:description><dc:publisher/><dc:date>2019-04-01</dc:date><dc:nsf_par_id>10105879</dc:nsf_par_id><dc:journal_name>Proceedings of Machine Learning Research</dc:journal_name><dc:journal_volume>89</dc:journal_volume><dc:journal_issue/><dc:page_range_or_elocation>1486-1495</dc:page_range_or_elocation><dc:issn>2640-3498</dc:issn><dc:isbn/><dc:doi>https://doi.org/</dc:doi><dcq:identifierAwardId>1929955</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>