<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/">
  <records count="1" morepages="false" start="1" end="1">
    <record rownumber="1">
      <dc:product_type>Journal Article</dc:product_type>
      <dc:title>Sublinear quantum algorithms for training linear and kernel-based classifiers</dc:title>
      <dc:creator>Li, Tongyang; Chakrabarti, Shouvanik; Wu, Xiaodi</dc:creator>
      <dc:corporate_author/>
      <dc:editor/>
      <dc:description>We investigate quantum algorithms for classification, a fundamental problem in machine learning, with provable guarantees. Given n d-dimensional data points, the state-of-the-art (and optimal) classical algorithm for training classifiers with constant margin by Clarkson et al. runs in Õ(n+d), which is also optimal in its input/output model. We design sublinear quantum algorithms for the same task running in Õ(√n+√d), a quadratic improvement in both n and d. Moreover, our algorithms use the standard quantization of the classical input and generate the same classical output, suggesting minimal overheads when used as subroutines for end-to-end applications. We also demonstrate a tight lower bound (up to poly-log factors) and discuss the possibility of implementation on near-term quantum machines.</dc:description>
      <dc:publisher/>
      <dc:date>2019-06-01</dc:date>
      <dc:nsf_par_id>10106372</dc:nsf_par_id>
      <dc:journal_name>Proceedings of Machine Learning Research</dc:journal_name>
      <dc:journal_volume>97</dc:journal_volume>
      <dc:journal_issue/>
      <dc:page_range_or_elocation>3815-3824</dc:page_range_or_elocation>
      <dc:issn>2640-3498</dc:issn>
      <dc:isbn/>
      <dc:doi>https://doi.org/</dc:doi>
      <dcq:identifierAwardId>1816695</dcq:identifierAwardId>
      <dc:subject/>
      <dc:version_number/>
      <dc:location/>
      <dc:rights/>
      <dc:institution/>
      <dc:sponsoring_org>National Science Foundation</dc:sponsoring_org>
    </record>
  </records>
</rdf:RDF>