<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Rethinking Score Distillation as a Bridge Between Image Distributions</dc:title><dc:creator>McAllister, David; Ge, Songwei; Huang, Jia-Bin; Jacobs, David; Efros, Alexei; Holynski, Aleksander; Kanazawa, Angjoo</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Score distillation sampling (SDS) has proven to be an important tool, enabling
the use of large-scale diffusion priors for tasks operating in data-poor domains.
Unfortunately, SDS has a number of characteristic artifacts that limit its usefulness in general-purpose applications. In this paper, we make progress toward
understanding the behavior of SDS and its variants by viewing them as solving
an optimal-cost transport path from a source distribution to a target distribution.
Under this new interpretation, these methods seek to transport corrupted images
(source) to the natural image distribution (target). We argue that current methods’
characteristic artifacts are caused by (1) linear approximation of the optimal path
and (2) poor estimates of the source distribution. We show that calibrating the text
conditioning of the source distribution can produce high-quality generation and
translation results with little extra overhead. Our method can be easily applied
across many domains, matching or beating the performance of specialized methods.
We demonstrate its utility in text-to-2D generation, text-based NeRF optimization, painting-to-real translation, optical illusion generation, and 3D sketch-to-real. We
compare our method to existing approaches for score distillation sampling and
show that it can produce high-frequency details with realistic colors.</dc:description><dc:publisher>Neural Information Processing Systems Foundation, Inc. (NeurIPS)</dc:publisher><dc:date>2024-12-16</dc:date><dc:nsf_par_id>10646782</dc:nsf_par_id><dc:journal_name/><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation>33779 to 33804</dc:page_range_or_elocation><dc:issn/><dc:isbn/><dc:doi>https://doi.org/10.52202/079017-1064</dc:doi><dcq:identifierAwardId>2213335</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location>Vancouver, Canada</dc:location><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>