Title: A Labeling Intercomparison of Retrogressive Thaw Slumps by a Diverse Group of Domain Experts
ABSTRACT Deep-learning (DL) models have become increasingly beneficial for the detection of retrogressive thaw slumps (RTS) in the permafrost domain. However, comparing accuracy metrics is challenging due to unstandardized labeling guidelines. To address this, we conducted an experiment with 12 international domain experts from a broad range of scientific backgrounds. Using 3 m PlanetScope multispectral imagery, they digitized RTS footprints at two sites. We evaluated label uncertainty by comparing the manually outlined RTS labels using Intersection-over-Union (IoU) and F1 metrics. At the Canadian Peel Plateau site, we observed good agreement, particularly in the active parts of RTS. Differences arose in the interpretation of the debris tongue and the stable, vegetated sections of RTS. At the Russian Bykovsky site, we observed a larger mismatch: the same differences appeared, and several participants also mistakenly identified non-RTS features. This emphasizes the importance of site-specific knowledge for reliable label creation. The experiment highlights the need for standardized labeling procedures and a clear definition of their scientific purpose. The most similar expert labels outperformed the accuracy metrics reported in the literature, demonstrating human labeling capability given proper training, site knowledge, and clear guidelines. These findings lay the groundwork for DL-based RTS monitoring in the pan-Arctic.
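As a rough illustration of the agreement metrics named above, here is a minimal Python sketch of per-pixel IoU and F1 between two experts' rasterized RTS masks. The array names and toy values are invented for illustration and are not from the paper.

import numpy as np

def iou_and_f1(mask_a, mask_b):
    # Per-pixel agreement between two boolean label masks.
    # IoU = |A n B| / |A u B|;  F1 = 2|A n B| / (|A| + |B|).
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    iou = inter / union if union else 1.0      # two empty masks agree perfectly
    denom = a.sum() + b.sum()
    f1 = 2 * inter / denom if denom else 1.0
    return float(iou), float(f1)

# Toy 4x4 "rasterized outlines" from two experts (hypothetical data)
expert_1 = np.array([[0,1,1,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]])
expert_2 = np.array([[0,1,1,0],[0,1,1,1],[0,0,0,0],[0,0,0,0]])
print(iou_and_f1(expert_1, expert_2))  # -> (0.666..., 0.8)

Both metrics are symmetric in the two masks, which is why they suit pairwise label intercomparison as well as model-versus-reference evaluation.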
Award ID(s):
1928048
PAR ID:
10549849
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Journal Name:
Permafrost and Periglacial Processes
Volume:
36
Issue:
1
ISSN:
1045-6740
Format(s):
Medium: X
Size(s):
p. 83-92
Sponsoring Org:
National Science Foundation
More Like this
  1. Scientific literature analysis needs fine-grained named entity recognition (NER) to provide a wide range of information for scientific discovery. For example, chemistry research needs to study dozens to hundreds of distinct, fine-grained entity types, making consistent and accurate annotation difficult even for crowds of domain experts. On the other hand, domain-specific ontologies and knowledge bases (KBs) can be easily accessed, constructed, or integrated, which makes distant supervision realistic for fine-grained chemistry NER. In distant supervision, training labels are generated by matching mentions in a document against concepts in the KBs. However, this kind of KB-matching suffers from two major challenges: incomplete annotation and noisy annotation. We propose ChemNER, an ontology-guided, distantly supervised method for fine-grained chemistry NER that tackles these challenges. It leverages the chemistry type ontology structure to generate distant labels with novel methods of flexible KB-matching and ontology-guided multi-type disambiguation, significantly improving distant label generation for the subsequent sequence-labeling model training. We also provide an expert-labeled chemistry NER dataset with 62 fine-grained chemistry types (e.g., chemical compounds and chemical reactions). Experimental results show that ChemNER is highly effective, substantially outperforming state-of-the-art NER methods (a 0.25 absolute F1-score improvement).
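For readers unfamiliar with distant supervision, here is a toy Python sketch of the naive KB-matching baseline that ChemNER improves upon. The miniature KB, greedy longest-match rule, and example sentence are invented for illustration and do not reproduce the paper's flexible matching or ontology-guided disambiguation.

# Naive KB-matching for distant NER supervision (illustrative only).
KB = {
    "sodium chloride": "ChemicalCompound",
    "hydrolysis": "ChemicalReaction",
    "benzene": "ChemicalCompound",
}

def distant_labels(tokens, kb, max_len=4):
    # Greedy longest-match of token n-grams against KB entries (BIO tags).
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + n]).lower()
            if span in kb:
                etype = kb[span]
                labels[i] = f"B-{etype}"
                for j in range(i + 1, i + n):
                    labels[j] = f"I-{etype}"
                i += n
                matched = True
                break
        if not matched:
            i += 1
    return labels

tokens = "Sodium chloride is produced without hydrolysis".split()
print(list(zip(tokens, distant_labels(tokens, KB))))
# Mentions absent from the KB stay "O" -- the incomplete-annotation problem.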
  2. Most research on deep learning (DL) applied to the physical layer of wireless communication does not foreground the critical role of the accuracy-generalization trade-off in developing and evaluating practical algorithms. To highlight the drawbacks of this common practice, we revisit a data-decoding example from one of the first papers that introduced DL-based end-to-end wireless communication systems to the research community and promoted the use of artificial intelligence (AI)/DL for the wireless physical layer. We then put forward two key trade-offs in designing DL models for communication: accuracy versus generalization, and compression versus latency. We discuss their relevance to wireless communications use cases involving emerging DL models, including large language models (LLMs). Finally, we summarize our proposed evaluation guidelines to enhance the research impact of DL on wireless communications. These guidelines attempt to reconcile the empirical nature of DL research with the rigorous performance requirements of wireless communications systems.
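A toy numpy sketch of the train-at-one-condition, test-across-conditions evaluation protocol this kind of work argues for: a one-parameter decoder (a decision threshold) is "trained" at a single SNR and then evaluated at SNRs it never saw. The BPSK/AWGN setup is invented for illustration and is not the paper's example.

import numpy as np

rng = np.random.default_rng(0)

def awgn_symbols(n, snr_db):
    # Random BPSK symbols (+/-1) through an AWGN channel.
    bits = rng.integers(0, 2, n)
    x = 2.0 * bits - 1.0
    sigma = 10 ** (-snr_db / 20)  # unit signal power
    return bits, x + sigma * rng.normal(size=n)

# "Train" the decoder at a single SNR.
bits_tr, y_tr = awgn_symbols(10_000, snr_db=10.0)
threshold = 0.5 * (y_tr[bits_tr == 1].mean() + y_tr[bits_tr == 0].mean())

# Evaluate at SNRs the decoder never saw during training.
for snr in [-5.0, 0.0, 5.0, 10.0]:
    bits_te, y_te = awgn_symbols(100_000, snr_db=snr)
    ber = np.mean((y_te > threshold) != (bits_te == 1))
    print(f"SNR {snr:5.1f} dB  BER {ber:.4f}")

Reporting accuracy only at the training condition, as the abstract warns, would hide how the decoder behaves across this sweep.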
  3. Open Source Software projects add labels to open issues to help contributors choose tasks. However, manually labeling issues is time-consuming and error-prone. Current automatic approaches for creating labels are mostly limited to classifying issues as bug/non-bug. In this paper, we investigate the feasibility and relevance of labeling issues with the domains of the APIs required to complete the tasks. We leverage the issues' descriptions and the project history to build prediction models, which achieved precision of up to 82% and recall of up to 97.8%. We also ran a user study (n=74) to assess the relevance of these labels to potential contributors. The results show that the labels were useful to participants in choosing tasks, and that the API-domain labels were selected more often than the existing architecture-based labels. Our results can inspire the creation of tools that automatically label issues, helping developers find tasks that better match their skills.
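As a hedged sketch of one way such prediction models could be built from issue text, here is a TF-IDF plus one-vs-rest logistic-regression pipeline in scikit-learn. The toy issues and API-domain label set are invented; the paper's models also leverage project history, which this sketch omits.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

# Toy issue descriptions and API-domain labels (invented for illustration).
issues = [
    "Fix crash when parsing malformed JSON config",
    "Add OAuth2 token refresh to the HTTP client",
    "Database migration fails on Postgres 14",
    "UI button misaligned on small screens",
]
labels = [["IO", "Serialization"], ["Network", "Security"], ["Database"], ["UI"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)  # multi-label indicator matrix

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(issues, y)

pred = clf.predict(["JSON parser throws on empty file"])
print(mlb.inverse_transform(pred))  # with real training data: e.g. [('IO', 'Serialization')]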
  4. Large-scale analysis of source code, and in particular scientific source code, holds the promise of better understanding the data science process, identifying analytical best practices, and providing insights to the builders of scientific toolkits. However, large corpora have remained unanalyzed in depth, as descriptive labels are absent and require expert domain knowledge to generate. We propose a novel weakly supervised transformer-based architecture for computing joint representations of code from both abstract syntax trees and surrounding natural language comments. We then evaluate the model on a new classification task: labeling computational notebook cells as stages in the data analysis process, from data import to wrangling, exploration, modeling, and evaluation. We show that our model, leveraging only easily available weak supervision, achieves a 38% increase in accuracy over expert-supplied heuristics and outperforms a suite of baselines. Our model enables us to examine a set of 118,000 Jupyter Notebooks to uncover common data analysis patterns. Focusing on notebooks with relationships to academic articles, we conduct the largest study of scientific code to date and find that notebooks that devote a higher fraction of code to the typically labor-intensive process of wrangling data are, in expectation, associated with lower citation counts for the corresponding papers. We also show significant differences between academic and non-academic notebooks, including that academic notebooks devote substantially more code to wrangling and exploring data, and less to modeling.
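A toy Python sketch of the kind of expert-supplied heuristic baseline such a model is compared against: map API calls extracted from a cell's AST to analysis stages. The stage keyword lists are invented; the paper's model instead learns joint representations from ASTs and comments.

import ast

# Invented keyword heuristics mapping called APIs to analysis stages.
STAGE_HINTS = {
    "import": {"read_csv", "read_json", "load"},
    "wrangle": {"dropna", "fillna", "merge", "groupby"},
    "explore": {"describe", "plot", "hist"},
    "model": {"fit", "train"},
    "evaluate": {"score", "predict", "confusion_matrix"},
}

def called_names(source):
    # Collect attribute/function names invoked in a code cell.
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            f = node.func
            names.add(f.attr if isinstance(f, ast.Attribute) else getattr(f, "id", ""))
    return names

def weak_stage_label(source):
    calls = called_names(source)
    for stage, hints in STAGE_HINTS.items():  # first matching stage wins
        if calls & hints:
            return stage
    return "unknown"

cell = "df = pd.read_csv('data.csv')\ndf = df.dropna()"
print(weak_stage_label(cell))  # -> 'import' (first stage with a hit)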
  5. Two general approaches are common for evaluating automatically generated labels in topic modeling: direct human assessment; or performance metrics that can be calculated without, but still correlate with, human assessment. However, both approaches implicitly assume that the quality of a topic label is single-dimensional. In contrast, this paper provides evidence that human assessments about the quality of topic labels consist of multiple latent dimensions. This evidence comes from human assessments of four simple labeling techniques. For each label, study participants responded to several items asking them to assess each label according to a variety of different criteria. Exploratory factor analysis shows that these human assessments of labeling quality have a two-factor latent structure. Subsequent analysis demonstrates that this multi-item, two-factor assessment can reveal nuances that would be missed using either a single-item human assessment of perceived label quality or established performance metrics. The paper concludes by suggesting future directions for the development of human-centered approaches to evaluating NLP and ML systems more broadly.
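As a minimal sketch of recovering a two-factor latent structure like the one reported, here is a scikit-learn FactorAnalysis run on simulated ratings. The data, loadings, and item counts are invented; the paper performed exploratory factor analysis on real human assessments.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Simulated ratings: 200 respondents x 6 items, where items 0-2 load on
# one latent factor and items 3-5 on another (synthetic data).
f1 = rng.normal(size=(200, 1))
f2 = rng.normal(size=(200, 1))
ratings = np.hstack([f1, f1, f1, f2, f2, f2]) + 0.5 * rng.normal(size=(200, 6))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(ratings)
print(np.round(fa.components_, 2))  # rows ~ factors; the two-factor structure emerges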