

Search for: All records

Award ID contains: 2207019

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be freely available during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Free, publicly-accessible full text available June 23, 2026
  2. The Heilmeier Catechism consists of a set of questions that researchers and practitioners can consider when formulating research and applied engineering projects. In this article, we suggest explicitly asking who is included and who is left out of consideration. 
    Free, publicly-accessible full text available May 1, 2026
  3. The present and future transition of lives and activities into virtual worlds --- worlds in which people interact using avatars --- creates novel privacy challenges and opportunities. Avatars present an opportunity for people to control the way they are represented to other users and the information shared or implied by that representation. Importantly, users with marginalized identities may have a unique set of concerns when choosing what information about themselves (and their identities) to conceal or expose in an avatar. We present a theoretical basis, supported by two empirical studies, to understand how marginalization impacts the ways in which people create avatars and perceive others' avatars: what information do people choose to reveal or conceal, and how do others react to these choices? In Study 1, participants from historically marginalized backgrounds felt more concerned about being devalued based on their identities in virtual worlds, which related to a lower desire to reveal their identities in an avatar, compared to non-marginalized participants. However, in Study 2 participants were often uncomfortable with others changing visible characteristics in an avatar, weighing concerns about others' anonymity against possible threats to their own safety and security online. Our findings demonstrate asymmetries in what information people prefer the self vs. others to reveal in their online representations: participants want privacy for themselves but to feel informed about others. Although avatars allow people to choose what information to reveal about themselves, people from marginalized backgrounds may still face backlash for concealing components of their identities to avoid harm. 
    Free, publicly-accessible full text available April 1, 2026
  4. We applied techniques from psychology --- typically used to visualize human bias --- to facial analysis systems, providing novel approaches for diagnosing and communicating algorithmic bias. First, we aggregated a diverse corpus of human facial images (N=1492) with self-identified gender and race. We tested four automated gender recognition (AGR) systems and found that some exhibited intersectional gender-by-race biases. Employing a technique developed by psychologists --- face averaging --- we created composite images to visualize these systems' outputs. For example, we visualized what an average woman looks like, according to a system's output. Second, we conducted two online experiments wherein participants judged the bias of hypothetical AGR systems. The first experiment involved participants (N=228) from a convenience sample. When depicting the same results in different formats, facial visualizations communicated bias to the same magnitude as statistics. In the second experiment with only Black participants (N=223), facial visualizations communicated bias significantly more than statistics, suggesting that face averages are meaningful for communicating algorithmic bias. 
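The face-averaging technique mentioned in item 4 can be sketched in a few lines: given aligned face images grouped by a system's predicted label, the composite for a label is the per-pixel mean of the images assigned to it. This is only an illustrative sketch; the array shapes, labels, and function name below are invented, and a real pipeline would align facial landmarks before averaging rather than use raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 10 aligned 64x64 grayscale face images (values in [0, 1]).
faces = rng.random((10, 64, 64))

# Stand-in for an automated gender recognition (AGR) system's
# predicted label for each image (0 or 1).
predicted = np.array([0, 1, 0, 0, 1, 1, 0, 1, 0, 1])

def face_average(images: np.ndarray, labels: np.ndarray, label: int) -> np.ndarray:
    """Per-pixel mean of all images the system assigned the given label."""
    return images[labels == label].mean(axis=0)

# Composite image of "what the system calls label 0", per the abstract's idea
# of visualizing a system's output as an average face.
composite_0 = face_average(faces, predicted, 0)
print(composite_0.shape)  # (64, 64)
```

Averaging in pixel space only makes sense once the faces share a common alignment, which is why psychologists' face-averaging pipelines warp each face to shared landmark positions first.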