

Search results: all records where Creators/Authors contains "Wang, Ben"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may lead to non-federal websites, whose policies may differ from this site's.

  1. When interacting with information retrieval (IR) systems, users affected by confirmation bias tend to select search results that confirm their existing beliefs on socially significant contentious issues. To understand the judgments and attitude changes of users searching online, our study examined how cognitively biased users interact with algorithmically biased search engine result pages (SERPs). We designed three-query search sessions on debated topics under various bias conditions, recruited 1,321 crowdsourcing participants, and explored their attitude changes, search interactions, and the effects of confirmation bias. Three key findings emerged: 1) most attitude changes occur in the initial query of a search session; 2) confirmation bias and result presentation on SERPs affect the number and depth of clicks in the current query and perceived familiarity with clicked results in subsequent queries; 3) the position of bias also affects the attitude changes of users with lower perceived openness to conflicting opinions. Our study goes beyond traditional simulation-based evaluation settings and simulated rational users, sheds light on the mixed effects of human biases and algorithmic biases in IR tasks on debated topics, and can inform the design of bias-aware user models, human-centered bias mitigation techniques, and socially responsible intelligent IR systems.
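The interaction of position bias and confirmation bias described in this abstract can be pictured with a toy click model (a minimal sketch with made-up parameter values, not the study's actual user model): examination probability decays with rank, and attitude-congruent results receive a click boost.

```python
def click_probability(rank, agrees_with_user, bias_strength=0.4, rank_decay=0.85):
    """Toy model of a confirmation-biased searcher on a SERP.

    rank: 1-indexed position of the result on the page.
    agrees_with_user: whether the result's stance matches the user's prior belief.
    All parameter values are illustrative assumptions, not estimates from the study.
    """
    # Position bias: results deeper in the ranking are examined less often.
    p_examine = rank_decay ** (rank - 1)
    # Confirmation bias: attitude-congruent results get a boost, incongruent a penalty.
    base_attractiveness = 0.5
    factor = 1 + bias_strength if agrees_with_user else 1 - bias_strength
    p_click_given_examine = min(base_attractiveness * factor, 1.0)
    return p_examine * p_click_given_examine
```

Under this sketch, an attitude-congruent result is clicked more often than an incongruent one at the same rank, mirroring the selection tendency the abstract describes.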
  2. Large language model (LLM) applications, such as ChatGPT, are a powerful tool for online information-seeking (IS) and problem-solving tasks. However, users still face challenges initializing and refining prompts, and their cognitive barriers and biased perceptions further impede task completion. These issues reflect broader challenges identified within the fields of IS and interactive information retrieval (IIR). To address these, our approach integrates task context and user perceptions into human-ChatGPT interactions through prompt engineering. We developed a ChatGPT-like platform with supportive functions, including perception articulation, prompt suggestion, and conversation explanation. Findings from our user study demonstrate that the supportive functions help users manage expectations, reduce cognitive load, refine prompts more effectively, and increase engagement. This research enhances our comprehension of designing proactive and user-centric systems with LLMs. It offers insights into evaluating human-LLM interactions and highlights potential challenges for underserved users.
  3. User search performance is multidimensional in nature and may be better characterized by metrics that depict users' interactions with both relevant and irrelevant results. Despite previous research on one-dimensional measures, it is still unclear how to characterize different dimensions of user performance and leverage that knowledge in developing proactive recommendations. To address this gap, we propose and empirically test a framework of search performance evaluation and build early performance prediction models to simulate proactive search path recommendations. Experimental results from four datasets of diverse types (1,482 sessions and 5,140 query segments from both controlled lab and natural settings) demonstrate that: 1) cluster patterns characterized by cost-gain-based multifaceted metrics can effectively differentiate high-performing users from other searchers, which forms the empirical basis for proactive recommendations; 2) whole-session performance can be reliably predicted at early stages of sessions (e.g., first and second queries); 3) recommendations built upon the search paths of system-identified high-performing searchers can significantly improve the search performance of struggling users. These results demonstrate the potential of leveraging collective wisdom from automatically identified high-performance user groups in developing and evaluating proactive in-situ search recommendations.
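One way to picture the early-prediction idea is a cost-gain efficiency score computed from only the first few query segments of a session (a hypothetical sketch: the metric, field names, and threshold are assumptions for illustration, not the paper's actual framework).

```python
from dataclasses import dataclass

@dataclass
class QuerySegment:
    relevant_clicks: int   # gain proxy: useful results found in this query
    total_clicks: int      # cost proxy: all results clicked in this query
    dwell_seconds: float   # cost proxy: time spent in this query

def session_efficiency(segments):
    """Toy cost-gain metric: gain per unit of search effort."""
    gain = sum(s.relevant_clicks for s in segments)
    cost = sum(s.total_clicks + s.dwell_seconds / 60.0 for s in segments)
    return gain / cost if cost else 0.0

def predict_high_performer(segments, first_k=2, threshold=0.3):
    """Early prediction: score the session from its first_k query segments only,
    mirroring the finding that whole-session performance is predictable from the
    first and second queries. The threshold value is illustrative."""
    return session_efficiency(segments[:first_k]) >= threshold
```

In a recommendation setting, search paths from sessions flagged by such an early predictor could then be surfaced to struggling users, as the abstract proposes.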
  4. Understanding the roles of search gain and cost in users' search decision-making is a key topic in interactive information retrieval (IIR). While previous research has developed user models based on simulated gains and costs, it is unclear how users' actual perceptions of search gains and costs form and change during search interactions. To address this gap, our study adopted expectation-confirmation theory (ECT) to investigate users' perceptions of gains and costs. We re-analyzed data from our previous study, examining how contextual and search features affect users' perceptions and how their expectation-confirmation states impact their subsequent searches. Our findings include: (1) the point where users' actual dwell time meets their constant expectation may serve as a reference point in evaluating perceived gain and cost; (2) these perceptions are associated with in situ experience represented by usefulness labels, browsing behaviors, and queries; (3) users' current confirmation states affect their perceptions of Web page usefulness in the subsequent query. Our findings demonstrate possible effects of expectation-confirmation, prospect theory, and information foraging theory, highlighting the complex relationships among gain/cost, expectations, and dwell time at the query level, and the reference-dependent expectation at the session level. These insights enrich user modeling and evaluation in human-centered IR.
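The expectation-confirmation mechanism described here can be sketched as a classifier over actual dwell time relative to the user's expected dwell time as the reference point (an illustrative simplification; the function, state names, and tolerance value are assumptions, not the study's model).

```python
def confirmation_state(actual_dwell, expected_dwell, tolerance=0.2):
    """Classify a page visit under a toy expectation-confirmation model.

    The user's expected dwell time acts as the reference point: visits near it
    confirm expectations, while large deviations are perceived as unexpectedly
    low or high cost. The 20% tolerance band is an assumption.
    """
    ratio = actual_dwell / expected_dwell
    if ratio < 1.0 - tolerance:
        return "positive_disconfirmation"   # finished faster than expected
    if ratio > 1.0 + tolerance:
        return "negative_disconfirmation"   # overran the expectation
    return "confirmation"
```

Per the abstract's third finding, a state computed in the current query could then condition the model's prediction of how the user judges Web page usefulness in the next query.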