Crowdsourcing platforms have traditionally been designed with a focus on workstation interfaces, restricting the flexibility that crowdworkers need. Recognizing this limitation and the need for more adaptable platforms, prior research has highlighted the diverse work processes of crowdworkers, influenced by factors such as device type and work stage. However, these variables have largely been studied in isolation. Our study is the first to explore the interconnected variabilities among these factors within the crowdwork community. Through a survey involving 150 Amazon Mechanical Turk crowdworkers, we uncovered three distinct groups characterized by their interrelated variabilities in key work aspects. The largest group exhibits a reliance on traditional devices, showing limited interest in integrating smartphones and tablets into their work routines. The second-largest group also primarily uses traditional devices but expresses a desire for supportive tools and scripts that enhance productivity across all devices, particularly smartphones and tablets. The smallest group actively uses and strongly prefers non-workstation devices, especially smartphones and tablets, for their crowdworking activities. We translate our findings into design insights for platform developers, discussing the implications for creating more personalized, flexible, and efficient crowdsourcing environments. Additionally, we highlight the unique work practices of these crowdworker clusters, offering a contrast to those of more traditional and established worker groups. 
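The abstract does not say how the three groups were identified. As a purely illustrative sketch, one way to recover such clusters from survey responses is k-means over a few device-usage features; the feature names, the synthetic data, and the choice of k = 3 below are assumptions for illustration only, not the study's actual methodology.

```python
# Illustrative only: cluster hypothetical survey responses into three groups
# with k-means. Feature names and data are invented for this sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Each row: [workstation_hours, smartphone_hours, tablet_hours, tool_interest]
responses = rng.uniform(low=[0, 0, 0, 1], high=[40, 20, 10, 5], size=(150, 4))

# Standardize so no single feature dominates the distance metric.
features = StandardScaler().fit_transform(responses)

# k = 3 mirrors the three groups reported in the abstract.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for group in range(3):
    members = responses[labels == group]
    print(f"group {group}: n={len(members)}, feature means={members.mean(axis=0).round(1)}")
```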
On Benchmarking for Crowdsourcing and Future of Work Platforms
                        
                    
    
Online crowdsourcing platforms have proliferated over the last few years and now cover a number of important domains, ranging from worker-task platforms such as Amazon Mechanical Turk and worker-for-hire platforms such as TaskRabbit to specialized platforms built around specific tasks, such as ridesharing services like Uber, Lyft, and Ola. An increasing proportion of the human workforce will be employed by these platforms in the near future. The crowdsourcing community has done yeoman’s work in designing effective algorithms for various key components, such as incentive design, task assignment, and quality control. Given the increasing importance of these crowdsourcing platforms, it is now time to design mechanisms that make it easier to evaluate their effectiveness. Specifically, we advocate developing benchmarks for crowdsourcing research. Benchmarks often identify important issues for the community to focus on and improve upon, and they have played a key role in the development of research domains as diverse as databases and deep learning. We believe that developing appropriate benchmarks for crowdsourcing will ignite further innovation. However, crowdsourcing, and the future of work in general, is a very diverse field, which makes developing benchmarks much more challenging. Substantial effort is needed that spans benchmarks for datasets, metrics, algorithms, platforms, and so on. In this article, we initiate a discussion of this important problem and issue a call to arms for the community to work on this initiative.
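To make the call for benchmarks concrete, the following sketch shows what a minimal crowdsourcing benchmark record and evaluation harness could look like; the class, fields, and metrics are hypothetical illustrations, not an existing suite or the authors' proposal.

```python
# Illustrative sketch of a crowdsourcing benchmark spec and harness.
# The dataclass fields, metric, and policies are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Assignment = List[Tuple[str, str]]  # (worker_id, task_id) pairs

@dataclass
class CrowdBenchmark:
    name: str
    workers: List[str]
    tasks: List[str]
    gold_labels: Dict[str, str]                      # task_id -> correct answer
    metrics: Dict[str, Callable[..., float]] = field(default_factory=dict)

def accuracy(predictions: Dict[str, str], gold: Dict[str, str]) -> float:
    """Fraction of tasks whose collected answer matches the gold label."""
    return sum(predictions.get(t) == g for t, g in gold.items()) / len(gold)

def evaluate(benchmark: CrowdBenchmark,
             assign: Callable[[List[str], List[str]], Assignment],
             answer: Callable[[str, str], str]) -> Dict[str, float]:
    """Run an assignment policy, collect (simulated) worker answers, score metrics."""
    predictions = {task: answer(worker, task)
                   for worker, task in assign(benchmark.workers, benchmark.tasks)}
    return {name: metric(predictions, benchmark.gold_labels)
            for name, metric in benchmark.metrics.items()}

# Toy usage: round-robin assignment and an oracle answerer on a tiny dataset.
bench = CrowdBenchmark(
    name="toy-labeling",
    workers=["w1", "w2"],
    tasks=["t1", "t2", "t3"],
    gold_labels={"t1": "cat", "t2": "dog", "t3": "cat"},
    metrics={"accuracy": accuracy},
)
round_robin = lambda ws, ts: [(ws[i % len(ws)], t) for i, t in enumerate(ts)]
oracle = lambda w, t: bench.gold_labels[t]
print(evaluate(bench, round_robin, oracle))  # {'accuracy': 1.0}
```

A real benchmark would of course add cost and latency metrics, noisy worker models, and standardized datasets; the point here is only that a shared spec plus a harness is enough to compare assignment policies on equal footing.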
- Award ID(s): 1840052
- PAR ID: 10187036
- Date Published:
- Journal Name: A Quarterly Bulletin of the Computer Society of the IEEE Technical Committee on Data Engineering
- ISSN: 1053-1238
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Targeting the right group of workers for crowdsourcing often achieves better-quality results. One unique example of targeted crowdsourcing is seeking community-situated workers whose familiarity with the background and norms of a particular group can help produce better outcomes or accuracy. These community-situated crowd workers can be recruited in different ways, from generic online crowdsourcing platforms or from online recovery communities. We evaluate three different approaches to recruiting generic and community-situated crowd workers in terms of the time and cost of recruitment and the accuracy of task completion. We consider the context of Alcoholics Anonymous (AA), the largest peer support group for recovering alcoholics, and the task of identifying and validating AA meeting information. We discuss the benefits and trade-offs of recruiting paid vs. unpaid community-situated workers and provide implications for future research in the recovery context and relevant domains of HCI, and for the design of crowdsourcing ICT systems.
- Efficient allocation of tasks to workers is a central problem in crowdsourcing. In this paper, we consider a special setting inspired by spatial crowdsourcing platforms where both workers and tasks arrive dynamically. Additionally, we assume all tasks are heterogeneous and each worker-task assignment brings a known reward. The natural challenge lies in how to incorporate the uncertainty in the arrivals from both workers and tasks into our online allocation policy such that the total expected reward is maximized. To formulate this, we assume the arrival patterns of worker “types” and task “types” can be predicted from historical data. Specifically, we consider a finite time horizon T and assume that in each time-step, a single worker and task are sampled (i.e., “arrive”) from two respective distributions independently, and that this sampling process repeats identically and independently for the entire T online time-steps. Our model, called Online Task Assignment with Two-Sided Arrival (OTA-TSA), is a significant generalization of the classical online task assignment where the set of tasks is assumed to be available offline. For the general version of OTA-TSA, we present an optimal non-adaptive algorithm which achieves an online competitive ratio of 0.295. For the special case of OTA-TSA where the reward is a function of just the worker type, we present an improved algorithm (which is adaptive) and achieves a competitive ratio of at least 0.343. On the hardness side, along with showing that the ratio obtained by our non-adaptive algorithm is the best possible among all non-adaptive algorithms, we further show that no (adaptive) algorithm can achieve a ratio better than 0.581 (unconditionally), even for the special case of OTA-TSA with homogeneous tasks (i.e., all rewards are the same). At the heart of our analysis lies a new technical tool (which is a refined notion of the birth-death process), called the two-stage birth-death process, which may be of independent interest. Finally, we perform numerical experiments on two real-world datasets obtained from crowdsourcing platforms to complement our theoretical results. (A simplified simulation of this two-sided arrival model is sketched after this list.)
- As people increasingly innovate outside of formal R&D departments, individuals take on the responsibility of attracting, managing, and protecting social, financial, human, and information capital. With internet technology playing a central role in how individuals work together to produce something that they could not produce alone, it is necessary to understand how technologies are shaping the innovation process from start to finish. We bring together human-computer interaction researchers and industry leaders who have worked with people and platforms designed to support collective innovation across diverse domains. We will discuss current and future research on the role of platforms in collective innovation, including topics in social computing, crowdsourcing, peer production, online communities, the gig economy, and online marketplaces.
- Attention check questions have become commonly used in online surveys published on popular crowdsourcing platforms as a key mechanism to filter out inattentive respondents and improve data quality. However, little research considers the vulnerabilities of this important quality control mechanism, which can allow attackers, including irresponsible and malicious respondents, to answer attention check questions automatically and efficiently achieve their goals. In this paper, we perform the first study to investigate such vulnerabilities, and demonstrate that attackers can leverage deep learning techniques to pass attention check questions automatically. We propose AC-EasyPass, an attack framework with a concrete model that combines a convolutional neural network and weighted feature reconstruction to easily pass attention check questions. We construct the first attention check question dataset, consisting of both original and augmented questions, and demonstrate the effectiveness of AC-EasyPass. We explore two simple defense methods, adding adversarial sentences and adding typos, for survey designers to mitigate the risks posed by AC-EasyPass; however, these methods are fragile due to their limitations from both technical and usability perspectives, underlining the challenging nature of defense. We hope our work will draw sufficient attention from the research community toward developing more robust attention check mechanisms. More broadly, our work intends to prompt the research community to seriously consider the emerging risks posed by the malicious use of machine learning techniques to the quality, validity, and trustworthiness of crowdsourcing and social computing. (A toy sketch of the typo-injection defense appears after this list.)
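The online task assignment paper above models each time step as an independent draw of one worker type and one task type. The sketch below simulates that two-sided arrival process with a simple threshold-greedy baseline; the type distributions, rewards, and pooling rule are assumptions for illustration and do not reproduce the paper's exact OTA-TSA model or its 0.295-competitive non-adaptive algorithm.

```python
# Illustrative simulation of a two-sided arrival model with a greedy baseline.
# Worker/task types, rewards, and the threshold policy are invented for this sketch.
import random

random.seed(0)

WORKER_TYPES = ["driver", "labeler"]
TASK_TYPES = ["ride", "image"]
REWARD = {("driver", "ride"): 3.0, ("driver", "image"): 0.5,
          ("labeler", "ride"): 0.2, ("labeler", "image"): 2.0}

def simulate(T: int = 1000, threshold: float = 1.0) -> float:
    """Each step, one worker type and one task type arrive (sampled i.i.d.);
    a threshold-greedy policy matches the best waiting pair if it pays enough."""
    workers, tasks, total = [], [], 0.0
    for _ in range(T):
        workers.append(random.choice(WORKER_TYPES))
        tasks.append(random.choice(TASK_TYPES))
        # Best (reward, worker index, task index) among all waiting pairs.
        reward, i, j = max((REWARD[(w, t)], i, j)
                           for i, w in enumerate(workers)
                           for j, t in enumerate(tasks))
        if reward >= threshold:              # skip low-value matches for now
            total += reward
            workers.pop(i)
            tasks.pop(j)
    return total

print(f"threshold-greedy total reward over 1000 steps: {simulate():.1f}")
```

Comparing such a baseline against the offline optimum on sampled arrival sequences is one simple way to estimate an empirical competitive ratio.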
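The attention-check paper above mentions adding typos as a simple (if fragile) defense. A toy sketch of that idea, with an invented perturbation rule (swap adjacent characters in a fraction of longer words), might look like this:

```python
# Illustrative sketch of the "add typos" defense for attention check questions.
# The perturbation rule and rate are assumptions, not the paper's implementation.
import random

def add_typos(question: str, typo_rate: float = 0.2, seed: int = 0) -> str:
    """Swap two adjacent characters in roughly `typo_rate` of the words,
    leaving short words untouched so the question stays human-readable."""
    rng = random.Random(seed)
    words = question.split()
    for i, word in enumerate(words):
        if len(word) > 3 and rng.random() < typo_rate:
            j = rng.randrange(len(word) - 1)          # position to swap
            chars = list(word)
            chars[j], chars[j + 1] = chars[j + 1], chars[j]
            words[i] = "".join(chars)
    return " ".join(words)

original = "Please select 'strongly agree' for this question to show you are paying attention."
print(add_typos(original))
```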