Title: "Guess what! You're the First to See this Event": Increasing Contribution to Online Production Communities
In this paper, we describe the results of an online field experiment examining the impact of messaging about task novelty on the volume of volunteers' contributions to an online citizen science project. Encouraging volunteers to provide a little more content as they work is an attractive strategy for increasing a community's output. Prior research found that an important motivation for participation in online citizen science is the wonder of being the first person to observe a particular image. To appeal to this motivation, a pop-up message was added to an online citizen science project alerting volunteers when they were the first to annotate a particular image. Our analysis reveals that new volunteers who saw these messages increased the volume of annotations they contributed. These results suggest an additional strategy for increasing the amount of work volunteers contribute to online communities, and to citizen science projects specifically.
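The intervention described above hinges on detecting whether a volunteer is the first to annotate a given image. The paper does not publish its implementation; the sketch below is a minimal, hypothetical illustration of the trigger logic, with all names (the annotation-count mapping, the message text handling) assumed rather than taken from the project's codebase.

```python
from typing import Dict, Optional

def is_first_to_annotate(annotation_counts: Dict[str, int], image_id: str) -> bool:
    """Return True when no prior annotations exist for this image.

    `annotation_counts` is a hypothetical mapping from image id to the
    number of annotations already submitted by other volunteers.
    """
    return annotation_counts.get(image_id, 0) == 0

def novelty_message(annotation_counts: Dict[str, int], image_id: str) -> Optional[str]:
    # Show the pop-up only to the first annotator of the image.
    if is_first_to_annotate(annotation_counts, image_id):
        return "Guess what! You're the first to see this event."
    return None
```

In a live system the count would come from the project's database at classification time; the dictionary here simply stands in for that lookup.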
Award ID(s):
1547880
NSF-PAR ID:
10026453
Author(s) / Creator(s):
Date Published:
Journal Name:
GROUP '16: Proceedings of the 19th International Conference on Supporting Group Work
Page Range / eLocation ID:
171 to 179
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In citizen science, participants’ productivity is imperative to project success. We investigate the feasibility of a collaborative approach to citizen science, within which productivity is enhanced by capitalizing on the diversity of individual attributes among participants. Specifically, we explore the possibility of enhancing productivity by integrating multiple individual attributes to inform the choice of which task should be assigned to which individual. To that end, we collect data in an online citizen science project composed of two task types: (i) filtering images of interest from an image repository in a limited time, and (ii) allocating tags to the objects in the filtered images over unlimited time. The first task is assigned to those who have more experience in playing action video games, and the second task to those who have higher intrinsic motivation to participate. While each attribute has weak predictive power on task performance, we demonstrate a greater increase in productivity when assigning participants to tasks based on a combination of these attributes. We acknowledge that the increase over random assignment of participants to tasks is modest, which could offset the effort of implementing our attribute-based task assignment scheme. This study constitutes a first step toward understanding and capitalizing on individual differences in attributes toward enhancing productivity in collaborative citizen science.
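The assignment rule described in that abstract can be sketched simply: route each participant to the task favored by their relatively stronger attribute. The code below is an illustrative toy, not the study's actual scheme; the attribute names and the assumption that scores are pre-normalized to [0, 1] are ours.

```python
def assign_tasks(participants):
    """Assign each participant to the task their stronger attribute favors.

    `participants` is a list of (participant_id, gaming_score, motivation_score)
    tuples, with both scores assumed normalized to [0, 1]. Action-game
    experience is taken as a weak predictor of filtering performance, and
    intrinsic motivation as a weak predictor of tagging persistence.
    """
    assignment = {}
    for pid, gaming, motivation in participants:
        # Ties go to filtering; any tie-breaking rule would do for a sketch.
        assignment[pid] = "filtering" if gaming >= motivation else "tagging"
    return assignment
```

A real scheme would weight and combine the predictors (e.g. via a fitted model) rather than compare raw scores, but the routing structure is the same.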
  2. Citizen science projects face a dilemma in relying on contributions from volunteers to achieve their scientific goals: providing volunteers with explicit training might increase the quality of contributions, but at the cost of losing the work done by newcomers during the training period, which for many is the only work they will contribute to the project. Based on research in cognitive science on how humans learn to classify images, we have designed an approach to use machine learning to guide the presentation of tasks to newcomers that help them more quickly learn how to do the image classification task while still contributing to the work of the project. A Bayesian model for tracking volunteer learning is presented. 
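That abstract does not detail its Bayesian model, so the sketch below shows only one simple instance of the general idea: a Beta-Bernoulli tracker that maintains a posterior over a volunteer's probability of classifying a gold-standard image correctly. The class name, the uniform prior, and the gold-standard framing are our assumptions, not the paper's.

```python
class VolunteerSkillTracker:
    """Illustrative Beta-Bernoulli tracker of a volunteer's accuracy.

    Maintains a Beta(alpha, beta) posterior over the probability that the
    volunteer classifies an image correctly, starting from a uniform
    Beta(1, 1) prior. Each observed outcome on a known-answer image
    updates the corresponding count by one.
    """

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of correct classifications
        self.beta = beta    # pseudo-count of incorrect classifications

    def update(self, correct):
        if correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def skill_estimate(self):
        # Posterior mean of the volunteer's accuracy.
        return self.alpha / (self.alpha + self.beta)
```

Such an estimate could then steer which training tasks a newcomer sees next, e.g. presenting easier exemplars while the estimate is low.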
  3.
    Unoccupied Aerial Vehicles (UAVs), or drone technologies, with their high spatial resolution, temporal flexibility, and ability to repeat photogrammetry, afford a significant advancement over other remote sensing approaches for coastal mapping, habitat monitoring, and environmental management. However, geographical drone mapping and in situ fieldwork often come with a steep learning curve requiring a background in drone operations, Geographic Information Systems (GIS), remote sensing, and related analytical techniques. Such a learning curve can be an obstacle to field implementation for researchers, community organizations, and citizen scientists wishing to include introductory drone operations in their work. In this study, we develop a comprehensive drone training program for research partners and community members to use cost-effective, consumer-quality drones to engage in introductory drone mapping of coastal seagrass monitoring sites along the west coast of North America. As a first step toward a longer-term Public Participation GIS process in the study area, the training program includes lessons for beginner drone users related to flying drones, autonomous route planning and mapping, field safety, GIS analysis, image correction and processing, and Federal Aviation Administration (FAA) certification and regulations. Training our research partners and students, who are in most cases novice users, is the first step in a larger process to increase participation in a broader project for seagrass monitoring in our case study. While our training program originated in the United States, we discuss our experiences so that research partners and communities around the globe can become more confident in introductory drone operations for basic science. In particular, our work targets novice users without a strong background in geographic research or remote sensing.
Such training provides technical guidance on the implementation of a drone mapping program for coastal research, and synthesizes our approaches to provide broad guidance for using drones in support of a developing Public Participation GIS process. 
  4. Abstract

    Giant star-forming clumps (GSFCs) are areas of intense star formation that are commonly observed in high-redshift (z ≳ 1) galaxies, but their formation and role in galaxy evolution remain unclear. Observations of low-redshift clumpy galaxy analogues are rare, but the availability of wide-field galaxy survey data makes the detection of large clumpy galaxy samples much more feasible. Deep Learning (DL), and in particular Convolutional Neural Networks (CNNs), have been successfully applied to image classification tasks in astrophysical data analysis. However, one application of DL that remains relatively unexplored is that of automatically identifying and localizing specific objects or features in astrophysical imaging data. In this paper, we demonstrate the use of DL-based object detection models to localize GSFCs in astrophysical imaging data. We apply the Faster Region-based Convolutional Neural Network object detection framework (FRCNN) to identify GSFCs in low-redshift (z ≲ 0.3) galaxies. Unlike other studies, we train different FRCNN models on observational data that was collected by the Sloan Digital Sky Survey and labelled by volunteers from the citizen science project ‘Galaxy Zoo: Clump Scout’. The FRCNN model relies on a CNN component as a ‘backbone’ feature extractor. We show that CNNs that have been pre-trained for image classification using astrophysical images outperform those that have been pre-trained on terrestrial images. In particular, we compare a domain-specific CNN – ‘Zoobot’ – with a generic classification backbone and find that Zoobot achieves higher detection performance. Our final model is capable of producing GSFC detections with a completeness and purity of ≥0.8 while only being trained on ∼5000 galaxy images.
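The completeness and purity figures quoted in that abstract correspond to what the detection literature usually calls recall and precision. A short sketch of how such metrics are computed from counts of matched detections (the counts themselves are illustrative, not from the paper):

```python
def completeness_and_purity(true_positives, false_negatives, false_positives):
    """Completeness (recall) and purity (precision) for a detector.

    completeness = TP / (TP + FN): fraction of real clumps recovered.
    purity       = TP / (TP + FP): fraction of detections that are real.
    """
    completeness = true_positives / (true_positives + false_negatives)
    purity = true_positives / (true_positives + false_positives)
    return completeness, purity
```

In a detection setting like this one, a "true positive" is typically a predicted bounding box matched to a volunteer-labelled clump above some overlap threshold; the threshold choice is part of the evaluation protocol, not of these formulas.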
  5. Abstract

    We present the Citizen Science program Active Asteroids and describe discoveries stemming from our ongoing project. Our NASA Partner program is hosted on the Zooniverse online platform and launched on 2021 August 31, with the goal of engaging the community in the search for active asteroids—asteroids with comet-like tails or comae. We also set out to identify other unusual active solar system objects, such as active Centaurs, active quasi-Hilda asteroids (QHAs), and Jupiter-family comets (JFCs). Active objects are rare in large part because they are difficult to identify, so we ask volunteers to assist us in searching for active bodies in our collection of millions of images of known minor planets. We produced these cutout images with our project pipeline that makes use of publicly available Dark Energy Camera data. Since the project launch, roughly 8300 volunteers have scrutinized some 430,000 images to great effect, which we describe in this work. In total, we have identified previously unknown activity on 15 asteroids, plus one Centaur, that were thought to be asteroidal (i.e., inactive). Of the asteroids, we classify four as active QHAs, seven as JFCs, and four as active asteroids, consisting of one main-belt comet (MBC) and three MBC candidates. We also include our findings concerning known active objects that our program facilitated, an unanticipated avenue of scientific discovery. These include discovering activity occurring during an orbital epoch for which objects were not known to be active, and the reclassification of objects based on our dynamical analyses.