<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Journal Article</dc:product_type><dc:title>Battling voice spoofing: a review, comparative analysis, and generalizability evaluation of state-of-the-art voice spoofing counter measures</dc:title><dc:creator>Khan, Awais; Malik, Khalid Mahmood; Ryan, James; Saravanan, Mikul</dc:creator><dc:corporate_author/><dc:editor/><dc:description>With the advent of automated speaker verification (ASV) systems comes an equal and
opposite development: malicious actors may seek to use voice spoofing attacks to fool
those same systems. Various countermeasures have been proposed to detect these spoofing attacks, but current offerings in this arena fall short of a unified and generalized
approach applicable in real-world scenarios. For this reason, defensive measures for ASV
systems produced in the last 6-7 years need to be classified, and qualitative and quantitative comparisons of state-of-the-art (SOTA) countermeasures should be performed to
assess the effectiveness of these systems against real-world attacks. Hence, in this work,
we conduct a review of the literature on spoofing detection using hand-crafted features,
deep learning, and end-to-end spoofing countermeasure solutions to detect logical access
attacks, such as speech synthesis and voice conversion, and physical access attacks, i.e.,
replay attacks. Additionally, we review integrated and unifed solutions to voice spoofng
evaluation and speaker verifcation, and adversarial and anti-forensic attacks on both voice
counter measures and ASV systems. In an extensive experimental analysis, the limitations
and challenges of existing spoofng counter measures are presented, the performance of
these counter measures on several datasets is reported, and cross-corpus evaluations are
performed, an evaluation that is nearly absent in the existing literature, in order to assess the
generalizability of existing solutions. For the experiments, we employ the ASVspoof2019,
ASVspoof2021, and VSDC datasets along with GMM, SVM, CNN, and CNN-GRU classifiers. For reproducibility of the results, the code of the testbed can be found at our GitHub
Repository (https://github.com/smileslab/Comparative-Analysis-Voice-Spoofing).</dc:description><dc:publisher/><dc:date>2023-01-01</dc:date><dc:nsf_par_id>10427026</dc:nsf_par_id><dc:journal_name>Artificial Intelligence Review</dc:journal_name><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation/><dc:issn>0269-2821</dc:issn><dc:isbn/><dc:doi>https://doi.org/10.1007/s10462-023-10539-8</dc:doi><dcq:identifierAwardId>1815724</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>