

Title: Recruiting, paying, and evaluating the experiences of civic scientists studying urban park usage during the beginning of the COVID-19 pandemic
This paper describes an attempt to utilize paid citizen science in a research project that documented urban park usage during the early stages of the COVID-19 pandemic in two U.S. cities. Strategies used by the research team to recruit, pay, and evaluate the experiences of the 43 citizen scientists are discussed alongside key challenges in contemporary citizen science. A literature review suggests that successful citizen science projects foster diverse and inclusive participation; develop appropriate ways to compensate citizen scientists for their work; maximize opportunities for participant learning; and ensure high standards for data quality. In this case study, the selection process proved successful in employing economically vulnerable individuals, though the citizen scientist participants were disproportionately female, young, White, non-Hispanic, single, and college educated relative to the communities studied. The participants reported that the financial compensation provided by the study, similar in amount to the economic stimulus checks distributed simultaneously by the Federal government, was reasonable given the workload, and many used it to cover basic household needs. Though the study took place in a period of high economic risk, and more than 80% of the participants had never participated in a scientific study, the experience was rated overwhelmingly positive. Participants reported that the work provided stress relief and indicated they would consider participating in similar research in the future. Despite the vast majority never having engaged in most park stewardship activities, they expressed interest in learning more about park usage, mask usage in public spaces, and socio-economic trends in relation to COVID-19. Though there were some minor challenges in data collection, data quality was sufficient to publish the topical results in a peer-reviewed companion paper.
Key insights on the logistical constraints faced by the research team are highlighted throughout the paper to advance the case for paid citizen science.
Journal Name:
Frontiers in Sustainable Cities
Sponsoring Org:
National Science Foundation
More Like this
  1. Urban parks and green spaces provide a wide range of ecosystem services, including social interaction and stress reduction. When COVID-19 closed schools and businesses and restricted social gatherings, parks became one of the few places that urban residents were permitted to visit outside their homes. With a focus on Philadelphia, PA and New York City, NY, this paper presents a snapshot of park usage during the early phases of the pandemic. Forty-three Civic Scientists were employed by the research team to observe usage in 22 different parks selected to represent low and high social vulnerability, and low, medium, and high population density. Despite speculation that parks could contribute to the spread of COVID-19, no strong correlation was found between the number of confirmed COVID-19 cases in adjacent zip codes and the number of park users. High social vulnerability neighborhoods were associated with a significantly higher number of COVID-19 cases ([Formula: see text]). In addition, no significant difference in the number of park users was detected between parks in high and low vulnerability neighborhoods. The number of park users did significantly increase with population density in both cities ([Formula: see text]), though usage varied greatly by park. Males were more frequently observed than females in parks in both high vulnerability and high-density neighborhoods. Although high vulnerability neighborhoods reported higher COVID-19 cases, residents of Philadelphia and New York City appear to have been undeterred from visiting parks during this phase of the pandemic. This snapshot study provides no evidence to support closing parks during the pandemic. To the contrary, people continued to visit parks throughout the study, underscoring their evident value as respite for urban residents during the early phases of the pandemic.
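The correlation analysis described above (park-user counts vs. confirmed cases in adjacent zip codes) can be sketched in plain Python. This is a minimal sketch: the data values are invented for illustration and are not the study's observations.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: observed park-user counts and COVID-19 case counts
# in the adjacent zip codes (NOT the study's data).
park_users = [120, 85, 40, 200, 150, 60, 95, 170]
covid_cases = [310, 290, 450, 280, 330, 400, 350, 300]
print(f"r = {pearson_r(park_users, covid_cases):.2f}")
```

A |r| near zero would be consistent with the paper's finding of no strong correlation; a significance test would additionally require a p-value, e.g. via `scipy.stats.pearsonr`.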
  2. The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer† * These authors contributed equally to this work † Corresponding authors ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in USA, Israel and Australia. Introduction This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is even further restricted in the context of any challenge run on confidential use cases or with sensitive data.
Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally in IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating. The challenge: enabling wide participation With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this “Deep Learning Epilepsy Detection Challenge”, participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles: 1. No participant should need to have in-depth knowledge of the specific domain. (i.e. no participant should need to be a neuroscientist or epileptologist.) 2. No participant should need to be an expert data scientist. 3. No participant should need more than basic programming knowledge. (i.e. no participant should need to learn how to process fringe data formats and stream data efficiently.) 4. No participant should need to provide their own computing resources. In addition to the above, our platform should further • guide participants through the entire process from sign-up to model submission, • facilitate collaboration, and • provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
The platform The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant that allowed users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which will be described in more detail in further sections. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources.
The starter kit also enabled submission of the final code to a data storage to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participant's code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility Functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team. Figure 1: High-level architecture of the challenge platform Measuring success The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, with a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's infrastructure WML. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performance that allows a hundred-fold reduction in the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication.
Equally important to solving the scientific challenge, however, was to understand whether we managed to encourage participation from non-expert data scientists. Figure 2: Primary occupation as reported by challenge participants Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 people had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in data science, software engineering and neuroscience. Conclusion Given the growing complexity of data science problems and increasing dataset sizes, in order to solve these problems it is imperative to enable collaboration between people with differences in expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
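The "dedicated spots" pattern described above, in which participants supply only pre-processing, model, and post-processing code while the platform handles data access and compute, can be sketched as follows. All function names, the windowing scheme, and the toy detection rule are illustrative assumptions, not the IBM platform's actual API.

```python
# Minimal sketch of a plug-in challenge pipeline: participants fill in the
# three hook functions; the harness handles windowing and evaluation.

def preprocess(signal):
    # participant hook: e.g. remove the recording's mean (baseline drift)
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def model(window):
    # participant hook: label a window as seizure (1) or background (0);
    # here a toy amplitude threshold stands in for a trained model
    return 1 if max(abs(s) for s in window) > 2.0 else 0

def postprocess(labels):
    # participant hook: drop isolated single-window detections
    out = []
    for i, label in enumerate(labels):
        left = labels[i - 1] if i > 0 else 0
        right = labels[i + 1] if i < len(labels) - 1 else 0
        out.append(1 if label and (left or right) else 0)
    return out

def run_pipeline(signal, window_size=4):
    # harness: chop the cleaned signal into windows, label each, then smooth
    clean = preprocess(signal)
    windows = [clean[i:i + window_size] for i in range(0, len(clean), window_size)]
    return postprocess([model(w) for w in windows])

toy_signal = [0.0] * 8 + [5.0] * 8 + [0.0] * 8
print(run_pipeline(toy_signal))  # → [0, 0, 1, 1, 0, 0]
```

The design choice mirrors the principles listed in the abstract: participants never touch data formats, storage, or compute, only the three hooks.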
  3. Wright College, an urban open-access community college, independently accredited within a larger community college system, is a federally recognized Hispanic-Serving Institution (HSI) with the largest community college enrollment of Hispanic students in its state. In 2018, Wright College received an inaugural National Science Foundation-Hispanic Serving Institution (NSF:HSI) research project grant “Building Capacity: Building Bridges into Engineering and Computer Science”. The project's overall goals are to increase underrepresented students pursuing an associate degree (AES) in engineering and computer science and streamline two transitions: high school to community college and 2-year to 4-year institutions. Through the grant, Wright College created a holistic and programmatic framework that examines and correlates engineering students' self-efficacy (the belief that students will succeed as engineers) and a sense of belonging with student success. The project focuses on Near-STEM ready students (students who need up to four semesters of math remediation before moving into Calculus 1). The project assesses qualitative and quantitative outcomes through surveys and case study interviews supplemented with retention, persistence, transfer, associate and bachelor's degree completion rates, and time for degree completion. The key research approach is to correlate student success data with self-efficacy and belonging measures. Outcomes and Impacts Three years into the project, Wright College Engineering and Computer Science Program was able to: • Develop and implement the Contextualized Summer Bridge with a total of 132 Near-STEM participants. One hundred twenty-seven (127) completed; 100% who completed the Bridge eliminated up to two years of math remediation, and 54% were directly placed in Calculus 1.
All successful participants were placed in different engineering pathways, and 11 students completed Associate in Engineering Science (AES) and transferred after two years from the Bridge. • Increase enrollment by 940% (25 to 235 students) • Retain 93% of first-year students (Fall to fall retention). Seventy-five percent (75%) transferred after two years from initial enrollment. • Develop a holistic and programmatic approach for transfer model, thus increasing partnerships with 4-year transfer institutions resulting in the expansion of guaranteed/dual admissions programs with scholarships, paid research experience, dual advising, and students transferring as juniors. • Increase diversity at Wright College by bridging the academic gap for Near-STEM ready students. • Increase self-efficacy and belonging among all Program participants. • Increase institutionalized collaborations responsible for Wright College's new designation as the Center of Excellence for Engineering and Computer Science. • Increase enrollment, retention, and transfer of Hispanic students instrumental for Wright College Seal of Excelencia recognition. Lessons Learned The framework established during the first year of the grant overwhelmingly increased belonging and self-efficacy correlated with robust outcomes. However, the COVID-19 pandemic provided new challenges and opportunities in the second and third years of the grant. While adaptations were made to compensate for the negative impact of the pandemic, the face-to-face interactions were critical to support students’ entry into pathways and persistence within the Program.
  4. Scientists who perform major survival surgery on laboratory animals face a dual welfare and methodological challenge: how to choose surgical anesthetics and post-operative analgesics that will best control animal suffering, knowing that both pain and the drugs that manage pain can all affect research outcomes. Scientists who publish full descriptions of animal procedures allow critical and systematic reviews of data, demonstrate their adherence to animal welfare norms, and guide other scientists on how to conduct their own studies in the field. We investigated what information on animal pain management a reasonably diligent scientist might find in planning for a successful experiment. To explore how scientists in a range of fields describe their management of this ethical and methodological concern, we scored 400 scientific articles that included major animal survival surgeries as part of their experimental methods, for the completeness of information on anesthesia and analgesia. The 400 articles (250 accepted for publication pre-2011, and 150 in 2014–15, along with 174 articles they reference) included thoracotomies, craniotomies, gonadectomies, organ transplants, peripheral nerve injuries, spinal laminectomies and orthopedic procedures in dogs, primates, swine, mice, rats and other rodents. We scored articles for Publication Completeness (PC), which was any mention of use of anesthetics or analgesics; Analgesia Use (AU), which was any use of post-surgical analgesics; and Analgesia Completeness (a composite score comprising intra-operative analgesia, extended post-surgical analgesia, and use of multimodal analgesia). 338 of 400 articles were PC. 98 of these 338 were AU, with some mention of analgesia, while 240 of 338 mentioned anesthesia only but not postsurgical analgesia. Journals’ caliber, as measured by their 2013 Impact Factor, had no effect on PC or AU.
We found no effect of whether a journal instructs authors to consult the ARRIVE publishing guidelines published in 2010 on PC or AC for the 150 mouse and rat articles in our 2014–15 dataset. None of the 302 articles that were silent about analgesic use included an explicit statement that analgesics were withheld, or a discussion of how pain management or untreated pain might affect results. We conclude that current scientific literature cannot be trusted to present full detail on use of animal anesthetics and analgesics. We report that publication guidelines focus more on other potential sources of bias in experimental results, under-appreciate the potential for pain and pain drugs to skew data, and thus mostly treat pain management as solely an animal welfare concern, in the jurisdiction of animal care and use committees. At the same time, animal welfare regulations do not include guidance on publishing animal data, even though publication is an integral part of the cycle of research and can affect the welfare of animals in studies building on published work, leaving it to journals and authors to voluntarily decide what details of animal use to publish. We suggest that journals, scientists and animal welfare regulators should revise current guidelines and regulations, on treatment of pain and on transparent reporting of treatment of pain, to improve this dual welfare and data-quality deficiency. (Carbone L, Austin J (2016) Pain and Laboratory Animals: Publication Practices for Better Data Reproducibility and Better Animal Welfare. PLoS ONE 11(5): e0155001.)
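The headline proportions in this abstract follow directly from the reported counts; a quick tally, using only the numbers stated above, looks like this:

```python
# Counts as stated in the abstract; labels follow the paper's definitions.
total_articles = 400
pc = 338            # Publication Completeness: any mention of anesthetics or analgesics
au = 98             # Analgesia Use: any post-surgical analgesic reported
anesthesia_only = pc - au

print(f"PC rate: {pc / total_articles:.1%}")          # share of articles with any drug reporting
print(f"AU rate among PC articles: {au / pc:.1%}")
print(f"Anesthesia-only articles: {anesthesia_only}")  # matches the reported 240
```

Note that the 302 articles described as silent about analgesic use combine the 62 non-PC articles with the 240 anesthesia-only articles.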
  5. COVID-19 resulted in health and logistical challenges for many sectors of the American economy, including the trucking industry. This study examined how the pandemic impacted the trucking industry, focusing on the pandemic’s impacts on company operations, health, and stress of trucking industry employees. Data were collected from three sources: surveys, focus groups, and social media posts. Individuals at multiple organizational levels of trucking companies (i.e., supervisors, upper-level management, and drivers) completed an online survey and participated in online focus groups. Data from focus groups were coded using a thematic analysis approach. Publicly available social media posts from Twitter were analyzed using a sentiment analysis framework to assess changes in public sentiment about the trucking industry pre- and during-COVID-19. Two themes emerged from the focus groups: (1) trucking company business strategies and adaptations and (2) truck driver experiences and workplace safety. Participants reported supply chain disruptions and new consumer buying trends as having larger industry-wide impacts. Company adaptability emerged due to freight variability, leading organizations to pivot business models and create solutions to reduce operational costs. Companies responded to COVID-19 by accommodating employees’ concerns and implementing safety measures. Truck drivers noted an increase in positive public perception of truck drivers, but job quality factors worsened due to closed amenities and decreased social interaction. Social media sentiment analysis also illustrated an increase in positive public sentiment towards the trucking industry during COVID-19. The pandemic resulted in multi-level economic, health, and social impacts on the trucking industry, which included economic impacts on companies and economic, social and health impacts on employees within the industry levels.
Further research can expand on this study to provide an understanding of the long-term impacts of the pandemic on trucking companies and on segments of the trucking industry workforce.
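A pre- vs. during-COVID sentiment comparison of the kind described can be sketched with a toy lexicon scorer. The word lists and sample posts below are invented for illustration; the study's actual sentiment framework and Twitter data are not reproduced here.

```python
# Toy lexicon-based sentiment scorer: +1 per positive word, -1 per negative
# word. Lexicon and sample posts are invented assumptions.
POSITIVE = {"thank", "hero", "heroes", "appreciate", "essential", "great"}
NEGATIVE = {"delay", "delays", "shortage", "closed", "stress", "unsafe"}

def sentiment(post):
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mean_sentiment(posts):
    return sum(sentiment(p) for p in posts) / len(posts)

pre_covid = ["shipping delays again", "rest stop closed and more stress"]
during_covid = ["thank you truckers", "truck drivers are essential heroes"]

# a during-COVID mean above the pre-COVID mean would mirror the reported shift
print(mean_sentiment(pre_covid), mean_sentiment(during_covid))
```

Production analyses typically use a validated lexicon or model (e.g. VADER) rather than a hand-built word list, but the comparison logic is the same.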