

This content will become publicly available on May 7, 2026

Title: The Observed Availability of Data and Code in Earth Science and Artificial Intelligence
Abstract: As the use of artificial intelligence (AI) has grown exponentially across a wide variety of science applications, it has become clear that sharing data and code is critical to facilitating reproducibility and innovation. AMS recently adopted a requirement that all papers include an availability statement. However, there is no requirement to ensure that the data and code are actually freely accessible during and after publication. Studies show that without such a requirement, data are openly available in only about a third to a half of journal articles. In this work, we surveyed two AMS journals, Artificial Intelligence for the Earth Systems (AIES) and Monthly Weather Review (MWR), and two non-AMS journals. These journals varied in primary topic focus, publisher, and whether an availability statement was required. We examined the extent to which data and code were stated to be available in all four journals, whether readers could easily access the data and code, and what justifications were commonly provided for articles without open data or code. Our analysis found that roughly 75% of all articles that produced data and had an availability statement made at least some of their data openly available. Code was made openly available less frequently in three of the four journals examined. Access to data or code was inhibited in approximately 15% of availability statements that contained at least one link. Finally, the most common justifications for not making data or code openly available referenced dataset size and restrictions imposed by non-co-author entities.
Award ID(s):
2019758
PAR ID:
10596315
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
AMS
Date Published:
Journal Name:
Bulletin of the American Meteorological Society
ISSN:
0003-0007
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Cognitive load theory (CLT) has driven numerous empirical studies for over 30 years and is a major theme in many of the most cited articles published between 1988 and 2023. However, CLT articles have not been compared to other educational psychology research in terms of the research designs used and the extent to which recommendations for practice are justified. As Brady and colleagues found, a large percentage of the educational psychology articles reviewed were not experimental and yet frequently made specific recommendations from observational/correlational data. Therefore, in this review, CLT articles were examined with regard to the types of research methodology employed and whether recommendations for practice were justified. Across several educational psychology journals in 2020 and 2023, 16 articles were determined to directly test CLT. In contrast to other articles, which employed mostly observational methods, all but two of the CLT articles employed experimental or intervention designs. For the two CLT articles that were observational, recommendations for practice were not made. Reasons for the importance of experimental work are discussed. 
  2. There have been numerous efforts documenting the effects of open science in existing papers; however, these efforts typically consider only the authors' analyses and supplemental materials. While understanding the current rate of open science adoption is important, it is also vital to explore the factors that may encourage such adoption. One such factor may be publishing organizations setting open science requirements for submitted articles, encouraging researchers to adopt more rigorous reporting and research practices. For example, within the education technology discipline, the ACM Conference on Learning @ Scale (L@S) has promoted open science practices since 2018 through a Call for Papers statement. The purpose of this study was to replicate previous papers within the proceedings of L@S and compare the degree of open science adoption and robust reproducibility practices to other conferences in education technology without a statement on open science. Specifically, we examined 93 papers and documented the open science practices used. We then attempted to reproduce the results, inviting the authors to participate to bolster the chance of success. Finally, we compared the overall adoption rates to those from other conferences in education technology. Although overall survey response rates were low, our cursory review suggests that researchers at L@S might be more familiar with open science practices than researchers who published in the International Conference on Artificial Intelligence in Education (AIED) and the International Conference on Educational Data Mining (EDM): 13 of 28 AIED and EDM respondents were unfamiliar with preregistrations and 7 with preprints, while only 2 of 7 L@S respondents were unfamiliar with preregistrations and none with preprints.
The overall adoption of open science practices at L@S was nonetheless low, with only 1% of papers providing open data, 5% providing open materials, and none providing a preregistration. All openly accessible work can be found in an Open Science Framework project.
  3. Scientists who perform major survival surgery on laboratory animals face a dual welfare and methodological challenge: how to choose surgical anesthetics and post-operative analgesics that will best control animal suffering, knowing that both pain and the drugs that manage pain can affect research outcomes. Scientists who publish full descriptions of animal procedures allow critical and systematic reviews of data, demonstrate their adherence to animal welfare norms, and guide other scientists in conducting their own studies in the field. We investigated what information on animal pain management a reasonably diligent scientist might find in planning for a successful experiment. To explore how scientists in a range of fields describe their management of this ethical and methodological concern, we scored 400 scientific articles that included major animal survival surgeries as part of their experimental methods for the completeness of information on anesthesia and analgesia. The 400 articles (250 accepted for publication pre-2011 and 150 in 2014–15, along with 174 articles they reference) included thoracotomies, craniotomies, gonadectomies, organ transplants, peripheral nerve injuries, spinal laminectomies, and orthopedic procedures in dogs, primates, swine, mice, rats, and other rodents. We scored articles for Publication Completeness (PC; any mention of use of anesthetics or analgesics), Analgesia Use (AU; any use of post-surgical analgesics), and Analgesia Completeness (AC; a composite score comprising intra-operative analgesia, extended post-surgical analgesia, and use of multimodal analgesia). Of the 400 articles, 338 were PC; 98 of these 338 were AU, with some mention of analgesia, while 240 of 338 mentioned anesthesia only and not post-surgical analgesia. Journals' caliber, as measured by their 2013 Impact Factor, had no effect on PC or AU.
For the 150 mouse and rat articles in our 2014–15 dataset, we found no effect on PC or AC of whether a journal instructs authors to consult the ARRIVE publishing guidelines (published in 2010). None of the 302 articles that were silent about analgesic use included an explicit statement that analgesics were withheld, or a discussion of how pain management or untreated pain might affect results. We conclude that the current scientific literature cannot be trusted to present full detail on the use of animal anesthetics and analgesics. We report that publication guidelines focus more on other potential sources of bias in experimental results, under-appreciate the potential for pain and pain drugs to skew data,
and thus mostly treat pain management as solely an animal welfare concern within the jurisdiction of animal care and use committees. At the same time, animal welfare regulations do not include guidance on publishing animal data, even though publication is an integral part of the cycle of research and can affect the welfare of animals in studies building on published work, leaving it to journals and authors to voluntarily decide what details of animal use to publish. We suggest that journals, scientists, and animal welfare regulators should revise current guidelines and regulations, on treatment of pain and on transparent reporting of treatment of pain, to remedy this dual welfare and data-quality deficiency. [Carbone L, Austin J (2016) Pain and Laboratory Animals: Publication Practices for Better Data Reproducibility and Better Animal Welfare. PLoS ONE 11(5): e0155001]
  4. Nucleotide sequence reagents underpin molecular techniques that have been applied across hundreds of thousands of publications. We have previously reported wrongly identified nucleotide sequence reagents in human research publications and described Seek & Blastn, a semi-automated screening tool for fact-checking their claimed status. We applied Seek & Blastn to screen >11,700 publications across five literature corpora, including all original publications in Gene from 2007 to 2018 and all original open-access publications in Oncology Reports from 2014 to 2018. After manually checking Seek & Blastn outputs for >3,400 human research articles, we identified 712 articles across 78 journals that described at least one wrongly identified nucleotide sequence. Verifying the claimed identities of >13,700 sequences highlighted 1,535 wrongly identified sequences, most of which were claimed targeting reagents for the analysis of 365 human protein-coding genes and 120 non-coding RNAs. The 712 problematic articles have received >17,000 citations, including citations by human clinical trials. Given our estimate that approximately one-quarter of problematic articles may misinform the future development of human therapies, urgent measures are required to address unreliable gene research articles.
  5.
    This article describes the motivation, design, and progress of the Journal of Open Source Software (JOSS). JOSS is a free and open-access journal that publishes articles describing research software. It has the dual goals of improving the quality of the software submitted and providing a mechanism for research software developers to receive credit. While designed to work within the current merit system of science, JOSS addresses the dearth of rewards for key contributions to science made in the form of software. JOSS publishes articles that encapsulate the scholarship contained in the software itself, and its rigorous peer review targets the software components: functionality, documentation, tests, continuous integration, and the license. A JOSS article contains an abstract describing the purpose and functionality of the software, references, and a link to the software archive. The article is the entry point of a JOSS submission, which encompasses the full set of software artifacts. Submission and review proceed in the open, on GitHub, where editors, reviewers, and authors work collaboratively. Unlike other journals, JOSS does not reject articles requiring major revision; while not yet accepted, articles remain visible and under review until the authors make adequate changes (or withdraw, if unable to meet requirements). Once an article is accepted, JOSS gives it a digital object identifier (DOI) and deposits its metadata in Crossref, after which the article can begin collecting citations on indexers such as Google Scholar. Authors retain copyright of their JOSS article, releasing it under a Creative Commons Attribution 4.0 International License. In its first year, starting in May 2016, JOSS published 111 articles, with more than 40 additional articles under review. JOSS is a sponsored project of the nonprofit organization NumFOCUS and is an affiliate of the Open Source Initiative (OSI).