Citizen science projects face a dilemma in relying on volunteer contributions to achieve their scientific goals: providing volunteers with explicit training might increase the quality of contributions, but at the cost of losing the work newcomers would otherwise do during the training period, which for many is the only work they will ever contribute to the project. Drawing on cognitive science research on how humans learn to classify images, we designed an approach that uses machine learning to guide the presentation of tasks to newcomers, helping them learn the image classification task more quickly while still contributing to the work of the project. We present a Bayesian model for tracking volunteer learning.
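The abstract does not specify the form of the Bayesian model, so the following is only a minimal sketch of what tracking volunteer learning could look like, assuming a simple Beta-Bernoulli formulation in which a volunteer's per-task success probability is updated after each gold-standard classification. All names and the 0.8 difficulty threshold are illustrative, not taken from the paper.

```python
# Hypothetical sketch of a Bayesian volunteer-skill tracker; the paper's
# actual model is not reproduced here. Skill is a Beta(alpha, beta)
# posterior over the volunteer's probability of classifying correctly.

class VolunteerSkillTracker:
    def __init__(self, alpha: float = 1.0, beta: float = 1.0) -> None:
        self.alpha = alpha  # pseudo-count of correct classifications
        self.beta = beta    # pseudo-count of incorrect classifications

    def update(self, correct: bool) -> None:
        """Incorporate one graded task (conjugate Beta-Bernoulli update)."""
        if correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def skill_estimate(self) -> float:
        """Posterior mean of the volunteer's success probability."""
        return self.alpha / (self.alpha + self.beta)

    def next_task_difficulty(self) -> str:
        """Guide task presentation: harder tasks once skill looks established."""
        return "hard" if self.skill_estimate > 0.8 else "easy"
```

Under this sketch, newcomers keep doing real classification work while the posterior sharpens, which is the trade-off the abstract describes.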
                            "Guess what! You're the First to See this Event": Increasing Contribution to Online Production Communities
                        
In this paper, we describe the results of an online field experiment examining the impact of messaging about task novelty on the volume of volunteers' contributions to an online citizen science project. Encouraging volunteers to provide a little more content as they work is an attractive strategy for increasing the community's output. Prior research found that an important motivation for participation in online citizen science is the wonder of being the first person to observe a particular image. To appeal to this motivation, we added a pop-up message to an online citizen science project that alerted volunteers when they were the first to annotate a particular image. Our analysis reveals that new volunteers who saw these messages increased the volume of annotations they contributed. These results suggest an additional strategy for increasing the amount of work volunteers contribute to online communities in general and to citizen science projects in particular.
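The novelty check behind the pop-up can be stated very simply; the paper does not publish its implementation, so the sketch below is a hypothetical reconstruction that assumes a per-image count of prior annotations. The function name and message text are illustrative (the message echoes the paper's title).

```python
# Hypothetical sketch of the first-annotator pop-up logic; the project's
# actual implementation is not published, so names here are illustrative.

from collections import defaultdict
from typing import Optional

annotation_counts = defaultdict(int)  # image_id -> number of annotations so far

def record_annotation(image_id: str) -> Optional[str]:
    """Record a volunteer's annotation; return a novelty message only
    when they are the first person to annotate this image."""
    is_first = annotation_counts[image_id] == 0
    annotation_counts[image_id] += 1
    if is_first:
        return "Guess what! You're the first to see this event."
    return None
```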
        
    
- Award ID(s): 1547880
- PAR ID: 10026453
- Date Published:
- Journal Name: GROUP '16: Proceedings of the 19th International Conference on Supporting Group Work
- Page Range / eLocation ID: 171 to 179
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Data work is often completed by crowdworkers, who are routinely dehumanized, disempowered, and sidelined. We turn to citizen science to reimagine data work, highlighting collaborative relationships between citizen science project managers and volunteers. Though citizen science and traditional crowd work entail similar forms of data work, such as classifying or transcribing large data sets, citizen science relies on volunteer contributions rather than paid data work. We detail the work citizen science project managers did to shape volunteer experiences: aligning science goals, minimizing barriers to participation, engaging communities, communicating with volunteers, providing training and education, rewarding contributions, and reflecting on volunteer work. These management strategies created opportunities for meaningful work by cultivating intrinsic motivation and fostering collaborative work relationships but ultimately limited participation to specific data-related tasks. We recommend management tactics and task design strategies for creating meaningful work for invisible collar workers, an understudied class of labor in CSCW.
- In citizen science, participants' productivity is imperative to project success. We investigate the feasibility of a collaborative approach to citizen science, within which productivity is enhanced by capitalizing on the diversity of individual attributes among participants. Specifically, we explore the possibility of enhancing productivity by integrating multiple individual attributes to inform the choice of which task should be assigned to which individual. To that end, we collect data in an online citizen science project composed of two task types: (i) filtering images of interest from an image repository in a limited time, and (ii) allocating tags on the object in the filtered images over unlimited time. The first task is assigned to those who have more experience in playing action video games, and the second task to those who have higher intrinsic motivation to participate. While each attribute has weak predictive power on the task performance, we demonstrate a greater increase in productivity when assigning participants to the task based on a combination of these attributes. We acknowledge that such an increase is modest compared to the case where participants are randomly assigned to the tasks, which could offset the effort of implementing our attribute-based task assignment scheme. This study constitutes a first step toward understanding and capitalizing on individual differences in attributes toward enhancing productivity in collaborative citizen science.
- Abstract: The bulk of research on citizen science participants is project centric, based on an assumption that volunteers experience a single project. Contrary to this assumption, survey responses (n = 3894) and digital trace data (n = 3649) from volunteers, who collectively engaged in 1126 unique projects, revealed that multiproject participation was the norm. Only 23% of volunteers were singletons (who participated in only one project). The remaining multiproject participants were split evenly between discipline specialists (39%) and discipline spanners (38% joined projects with different disciplinary topics) and unevenly between mode specialists (52%) and mode spanners (25% participated in online and offline projects). Public engagement was narrow: The multiproject participants were eight times more likely to be White and five times more likely to hold advanced degrees than the general population. We propose a volunteer-centric framework that explores how the dynamic accumulation of experiences in a project ecosystem can support broad learning objectives and inclusive citizen science.
- Abstract: We present the first results from Citizen ASAS-SN, a citizen science project for the All-Sky Automated Survey for Supernovae (ASAS-SN) hosted on the Zooniverse platform. Citizen ASAS-SN utilizes the newer, deeper, higher cadence ASAS-SN g-band data and tasks volunteers to classify periodic variable star candidates based on their phased light curves. We started from 40,640 new variable candidates from an input list of ∼7.4 million stars with δ < −60°, and the volunteers identified 10,420 new discoveries, which they classified as 4234 pulsating variables, 3132 rotational variables, 2923 eclipsing binaries, and 131 variables flagged as Unknown. They classified known variable stars with an accuracy of 89% for pulsating variables, 81% for eclipsing binaries, and 49% for rotational variables. We examine user performance and agreement between users, and compare the citizen science classifications with our machine learning classifier updated for the g-band light curves. In general, user activity correlates with higher classification accuracy and higher user agreement. We used the users' "Junk" classifications to develop an effective machine learning classifier to separate real from false variables, and there is a clear path for using this "Junk" training set to significantly improve our primary machine learning classifier. We also illustrate the value of Citizen ASAS-SN for identifying unusual variables with several examples.
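The attribute-based task assignment described in the collaborative citizen science abstract above (action-game experience for the time-limited filtering task, intrinsic motivation for the tagging task) can be sketched as a simple rule; the study does not publish its exact assignment scheme, so the comparison below is an illustrative assumption, not the authors' method.

```python
# Hypothetical sketch of attribute-based task assignment. Assumes each
# participant has two attributes normalized to [0, 1]; the comparison
# rule is illustrative, not taken from the study.

from typing import List, Tuple

def assign_tasks(
    participants: List[Tuple[str, float, float]],
) -> List[Tuple[str, str]]:
    """Assign each (name, game_experience, intrinsic_motivation) participant
    to the time-limited 'filtering' task when action-game experience
    dominates, otherwise to the untimed 'tagging' task."""
    assignments = []
    for name, game_exp, motivation in participants:
        task = "filtering" if game_exp > motivation else "tagging"
        assignments.append((name, task))
    return assignments
```

A real deployment would calibrate the two attribute scales against each other, since the study notes each attribute alone is only weakly predictive.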
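The Citizen ASAS-SN abstract above describes training a machine learning classifier on volunteers' "Junk" labels to separate real from false variables. As a minimal, hypothetical stand-in for that full classifier, the sketch below learns a single threshold on one light-curve feature from Junk-labeled examples; the feature, function names, and single-feature setup are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of learning a real-vs-junk separator from
# volunteers' Junk labels. Citizen ASAS-SN uses a full machine learning
# classifier; a one-feature decision stump stands in for it here.

from typing import List, Tuple

def train_stump(examples: List[Tuple[float, bool]]) -> float:
    """Learn a threshold on one light-curve feature (e.g. periodogram power)
    separating real variables (True) from junk (False).

    Tries midpoints between sorted feature values and keeps the threshold
    with the highest training accuracy.
    """
    xs = sorted(x for x, _ in examples)
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]

    def accuracy(t: float) -> float:
        return sum((x >= t) == is_real for x, is_real in examples) / len(examples)

    return max(candidates, key=accuracy)

def is_real_variable(feature: float, threshold: float) -> bool:
    """Classify a candidate as a real variable if its feature clears the threshold."""
    return feature >= threshold
```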
 An official website of the United States government