<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Exploring “Just Noticeable” Group Fairness in Rankings</dc:title><dc:creator>Alkhathlan, Mallak; Shrestha, Hilson; Harrison, Lane; Rundensteiner, Elke</dc:creator><dc:corporate_author/><dc:editor/><dc:description>The plethora of fairness metrics developed for ranking-based
decision-making raises the question: which metrics align best
with people’s perceptions of fairness, and why? Most prior
studies examining people’s perceptions of fairness metrics
tend to use ordinal rating scales (e.g., Likert scales). However,
such scales can be ambiguous in their interpretation across
participants, and can be influenced by interface features used
to capture responses. We address this gap by exploring the use
of two-alternative forced choice methodologies, used extensively
outside the fairness community for comparing visual
stimuli, to quantitatively compare participant perceptions
across fairness metrics and ranking characteristics. We report
a crowdsourced experiment with 224 participants across four
conditions: two alternative rank fairness metrics, ARP and
NDKL, and two ranking characteristics, lists of 20 and 100
candidates, resulting in over 170,000 individual judgments.
Quantitative results show systematic differences in how people
interpret these metrics, and surprising exceptions where
fairness metrics disagree with people’s perceptions. Qualitative
analyses of participant comments reveal an interplay
between cognitive and visual strategies that affects people’s
perceptions of fairness. From these results, we discuss future
work in aligning fairness metrics with people’s perceptions,
and highlight the need for, and benefits of, expanding methodologies
for fairness studies.</dc:description><dc:publisher>Association for the Advancement of Artificial Intelligence (www.aaai.org).</dc:publisher><dc:date>2025-10-20</dc:date><dc:nsf_par_id>10634931</dc:nsf_par_id><dc:journal_name/><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation/><dc:issn/><dc:isbn/><dc:doi>https://doi.org/</dc:doi><dcq:identifierAwardId>2007932</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location>AIES 2025, Madrid, Spain</dc:location><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>