- PAR ID:
- 10374260
- Date Published:
- Journal Name:
- Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
- Volume:
- 10
- Issue:
- 1
- ISSN:
- 2769-1330
- Page Range / eLocation ID:
- 195 to 206
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- The artificial intelligence (AI) industry has created new jobs that are essential to the real-world deployment of intelligent systems. Part of this work involves labeling data for machine learning models or having workers complete tasks that AI alone cannot do. These workers are usually known as ‘crowd workers’: part of a large distributed crowd that works jointly (but separately) on tasks, yet they are often invisible to end-users, and as a result they are often paid below minimum wage and have limited career growth. In this chapter, we draw upon the field of human–computer interaction to provide research methods for studying and empowering crowd workers. We present our Computational Worker Leagues, which enable workers to work towards their desired professional goals and also supply quantitative information about crowdsourcing markets. This chapter demonstrates the benefits of this approach and highlights important factors to consider when researching the experiences of crowd workers.
- This paper explores the application of sensemaking theory to support non-expert crowds in intricate data annotation tasks. We investigate the influence of procedural context and data context on the annotation quality of novice crowds, defining procedural context as completing multiple related annotation tasks on the same data point, and data context as annotating multiple data points with semantic relevance. We conducted a controlled experiment involving 140 non-expert crowd workers, who generated 1400 event annotations across various procedural and data context levels. Assessments of annotations demonstrate that high procedural context positively impacts annotation quality, although this effect diminishes with lower data context. Notably, assigning multiple related tasks to novice annotators yields quality comparable to expert annotations, without costing additional time or effort. We discuss the trade-offs associated with procedural and data contexts and draw design implications for engaging non-experts in crowdsourcing complex annotation tasks.
- As AI-based face recognition technologies are increasingly adopted for high-stakes applications like locating suspected criminals, public concerns about the accuracy of these technologies have grown as well. These technologies often present a human expert with a shortlist of high-confidence candidate faces from which the expert must select correct match(es) while avoiding false positives, which we term the “last-mile problem.” We propose Second Opinion, a web-based software tool that employs a novel crowdsourcing workflow inspired by cognitive psychology, seed-gather-analyze, to assist experts in solving the last-mile problem. We evaluated Second Opinion with a mixed-methods lab study involving 10 experts and 300 crowd workers who collaborated to identify people in historical photos. We found that crowds can eliminate 75% of the false positives among the highest-confidence candidates suggested by face recognition, and that experts were enthusiastic about using Second Opinion in their work. We also discuss broader implications for crowd–AI interaction and crowdsourced person identification.
- Crowdsourced content creation, like articles or slogans, can be powered by crowds of volunteers or by workers from paid task markets. Volunteers often have expertise and are intrinsically motivated, but they are a limited resource and are not always reliably available. Paid crowd workers, on the other hand, are reliably available and can be guided to produce high-quality content, but they cost money. How can these different populations of crowd workers be leveraged together to power cost-effective yet high-quality crowd-powered content-creation systems? To answer this question, we need to understand the strengths and weaknesses of each. We conducted an online study in which we hired paid crowd workers and recruited volunteers from social media to complete three content creation tasks for three real-world non-profit organizations that focus on empowering women. These tasks ranged in complexity from simply generating keywords or slogans to creating a draft biographical article. Our results show that paid crowds completed work and structured content following editorial guidelines more effectively, whereas volunteer crowds provided more original content. Based on these findings, we suggest that crowd-powered content-creation systems could gain the best of both worlds by leveraging volunteers to scaffold the direction that original content should take, while having paid crowd workers structure content and prepare it for real-world use.
- Individuals and organizations increasingly use online platforms to broadcast difficult problems to crowds. According to the “wisdom of the crowd,” crowds are so large that they can bring together many diverse experts, effectively pool distributed knowledge, and thus solve challenging problems. In this study we test whether crowds of increasing size, from 4 to 32 members, perform better on a classic psychology problem that requires pooling distributed facts.