Search for: All records

Award ID contains: 1527453


  1. Identifying historical photographs of people can generate significant cultural and economic value, but misidentifications can cause harms such as falsifying the historical record, spreading disinformation, and feeding conspiracy theories. In this paper, we introduce DoubleCheck, a quality assessment framework based on the concepts of information provenance and stewardship for verifying historical photo identifications. We built and evaluated DoubleCheck on Civil War Photo Sleuth (CWPS), a popular online community dedicated to identifying photos from the American Civil War era (1861-65) using facial recognition and crowdsourcing.
  3. Identifying people in historical photographs is important for interpreting material culture, correcting the historical record, and creating economic value, but it is also a complex and challenging task. In this paper, we focus on identifying portraits of soldiers who participated in the American Civil War (1861-65). Millions of these portraits survive, but only 10-20% are identified. We created Photo Sleuth, a web-based platform that combines crowdsourced human expertise and automated face recognition to support Civil War portrait identification. Our mixed-methods evaluation of Photo Sleuth one month after its public launch showed that it helped users successfully identify unknown portraits.
  4. Capturing analytic provenance is important for refining sensemaking analysis. However, understanding this provenance can be difficult. First, making sense of the reasoning in intermediate steps is time-consuming, especially in distributed sensemaking, where the provenance is less cohesive because each analyst sees only a small portion of the data without an understanding of the overall collaboration workflow. Second, analysis errors from one step can propagate to later steps. Furthermore, in exploratory sensemaking, it is difficult to define what an error is, since there are no correct answers to reference. In this paper, we explore provenance analysis for distributed sensemaking in the context of crowdsourcing, where distributed analysis contributions are captured in microtasks. We propose crowd auditing as a way to help individual analysts visualize and trace provenance to debug distributed sensemaking. To evaluate this concept, we implemented a crowd auditing tool, CrowdTrace. Our user study-based evaluation demonstrates that CrowdTrace offers an effective mechanism to audit and refine multi-step crowd sensemaking.
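
    The sketch below is only a rough illustration of the kind of step-level provenance such a tool might trace; the Step fields, task labels, and example data are placeholder assumptions, not CrowdTrace's actual data model.

        # Hypothetical step-level provenance records for chained microtasks.
        # Field names and task labels are illustrative, not CrowdTrace's schema.
        from dataclasses import dataclass, field

        @dataclass
        class Step:
            worker_id: str
            task: str        # e.g. "extract" or "hypothesize"
            output: str      # the worker's contribution
            parents: list = field(default_factory=list)  # upstream Step objects

        def trace(step, depth=0):
            """Walk upstream from a questionable conclusion to its supporting steps."""
            print("  " * depth + f"{step.task} by {step.worker_id}: {step.output}")
            for parent in step.parents:
                trace(parent, depth + 1)

        # Audit where a suspect hypothesis came from.
        e1 = Step("w1", "extract", "Meeting was on Tuesday")
        e2 = Step("w2", "extract", "Package left at the station")
        h1 = Step("w3", "hypothesize", "Drop-off planned for Tuesday at the station",
                  parents=[e1, e2])
        trace(h1)
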
  5. Identifying people in historical photographs is important for preserving material culture, correcting the historical record, and creating economic value, but it is also a complex and challenging task. In this paper, we focus on identifying portraits of soldiers who participated in the American Civil War (1861-65), the first widely photographed conflict. Many thousands of these portraits survive, but only 10-20% are identified. We created Photo Sleuth, a web-based platform that combines crowdsourced human expertise and automated face recognition to support Civil War portrait identification. Our mixed-methods evaluation of Photo Sleuth one month after its public launch showed that it helped users successfully identify unknown portraits and provided a sustainable model for volunteer contribution. We also discuss implications for crowd-AI interaction and person identification pipelines.
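
    A rough sketch of the general crowd-AI pattern these Photo Sleuth abstracts describe: face recognition ranks identified reference portraits by embedding similarity, and only a short list goes to human volunteers for the final judgment. The embedding source, threshold, and list size here are placeholder assumptions, not the platform's actual implementation.

        # Face recognition proposes candidates; human volunteers decide.
        # reference_gallery holds identified portraits with precomputed embeddings;
        # the threshold and list size are arbitrary, not Photo Sleuth's values.
        import numpy as np

        def cosine_similarity(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def shortlist(unknown_embedding, reference_gallery, k=10, threshold=0.5):
            """Return the k identified portraits most similar to the unknown face."""
            scored = [
                (record, cosine_similarity(unknown_embedding, record["embedding"]))
                for record in reference_gallery
            ]
            scored = [(rec, sim) for rec, sim in scored if sim >= threshold]
            scored.sort(key=lambda pair: pair[1], reverse=True)
            return scored[:k]  # volunteers, not the model, make the final call
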
  6. Misinformation poses a threat to public health, safety, and democracy. Training novices to debunk visual misinformation with image verification techniques has shown promise, yet little is known about how novices do so in the wild, and what methods prove effective. Thus, we studied 225 verification challenges posted by experts on Twitter over one year with the aim of improving novices’ skills. We collected, annotated, and analyzed these challenges and over 3,100 replies by 304 unique participants. We find that novices employ multiple tools and approaches, and techniques like collaboration and reverse image search significantly improve performance.
  7. As AI-based face recognition technologies are increasingly adopted for high-stakes applications like locating suspected criminals, public concerns about the accuracy of these technologies have grown as well. These technologies often present a human expert with a shortlist of high-confidence candidate faces from which the expert must select correct match(es) while avoiding false positives, which we term the “last-mile problem.” We propose Second Opinion, a web-based software tool that employs a novel crowdsourcing workflow inspired by cognitive psychology, seed-gather-analyze, to assist experts in solving the last-mile problem. We evaluated Second Opinion with a mixed-methods lab study involving 10 experts and 300 crowd workers who collaborate to identify people in historical photos. We found that crowds can eliminate 75% of false positives from the highest-confidence candidates suggested by face recognition, and that experts were enthusiastic about using Second Opinion in their work. We also discuss broader implications for crowd–AI interaction and crowdsourced person identification. 
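
    A minimal sketch of how crowd judgments over a face-recognition shortlist might be aggregated to screen out false positives before the expert's review. The vote counts and agreement threshold are assumptions, and the paper's seed-gather-analyze workflow involves more structured sub-tasks than shown here.

        # Aggregate crowd judgments over a face-recognition shortlist.
        # min_workers and min_agreement are assumed values, not the paper's.
        from collections import defaultdict

        def screen_candidates(votes, min_workers=5, min_agreement=0.6):
            """Keep candidates that enough workers judged a plausible match.

            votes: iterable of (candidate_id, is_match) pairs from workers.
            """
            tally = defaultdict(lambda: [0, 0])   # candidate -> [yes, total]
            for candidate_id, is_match in votes:
                tally[candidate_id][1] += 1
                if is_match:
                    tally[candidate_id][0] += 1
            return {
                cid for cid, (yes, total) in tally.items()
                if total >= min_workers and yes / total >= min_agreement
            }

        # Two candidates survive; the expert reviews only those.
        votes = ([("c1", True)] * 4 + [("c1", False)]
                 + [("c2", False)] * 4 + [("c2", True)]
                 + [("c3", True)] * 5)
        print(screen_candidates(votes))   # e.g. {'c1', 'c3'}
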
  8. Many types of investigative work involve verifying the legitimacy of visual evidence by identifying the precise geographic location where a photo or video was taken. Professional geolocation is often a manual, time-consuming process that can involve searching large areas of satellite imagery for potential matches. In this paper, we explore how crowdsourcing can be used to support expert image geolocation. We adapt an expert diagramming technique to overcome spatial reasoning limitations of novice crowds so that they can support an expert's search. In an experiment (n=540), we found that diagrams work significantly better than ground-level photos and allow crowds to reduce a search area by half before any expert intervention. We also discuss hybrid approaches to complex image analysis combining crowds, experts, and computer vision. 
  9. Geolocation, the process of identifying the precise location in the world where a photo or video was taken, is central to many types of investigative work, from debunking fake news posted on social media to locating terrorist training camps. Professional geolocation is often a manual, time-consuming process that involves searching large areas of satellite imagery for potential matches. In this paper, we explore how crowdsourcing can be used to support expert image geolocation. We adapt an expert diagramming technique to overcome spatial reasoning limitations of novice crowds, allowing them to support an expert’s search. In two experiments (n=1080), we found that diagrams work significantly better than ground-level photos and allow crowds to reduce a search area by half before any expert intervention. We also discuss hybrid approaches to complex image analysis combining crowds, experts, and computer vision. 
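
    A minimal sketch of the search-area-reduction idea in the two geolocation abstracts above: the satellite search area is split into grid cells, crowd workers rule cells in or out against the expert's diagram, and only surviving cells go back to the expert. The grid representation and elimination rule are assumptions, not the papers' method.

        # Crowd-driven pruning of a satellite-imagery search area.
        # The grid cells, vote format, and cutoffs are illustrative assumptions.

        def prune_search_area(cells, crowd_rulings, min_votes=3):
            """Drop grid cells that a majority of workers ruled out.

            cells: identifiers of the grid cells covering the search area.
            crowd_rulings: cell id -> list of booleans
                           (True = worker thinks the cell could match the diagram).
            """
            surviving = []
            for cell in cells:
                rulings = crowd_rulings.get(cell, [])
                if len(rulings) < min_votes:
                    surviving.append(cell)   # too few judgments: keep for the expert
                elif sum(rulings) / len(rulings) >= 0.5:
                    surviving.append(cell)   # workers think a match is plausible
            return surviving

        cells = ["A1", "A2", "B1", "B2"]
        rulings = {"A1": [False, False, False], "A2": [True, True, False],
                   "B1": [True, True, True], "B2": [False, True, False]}
        print(prune_search_area(cells, rulings))   # ['A2', 'B1'] go to the expert
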