

This content will become publicly available on November 8, 2026

Title: Using CODAP, Datasets, and AI to Study Seabird Restoration
The study of seabirds can provide a fascinating subject for the integration of datasets and data practices with scientific phenomena. Workshop participants will examine trends and correlations in several decades of National Audubon Society data about puffins, using an accessible open-source education data tool (CODAP). They will examine relationships among variables including sea surface temperature, fish in the puffin diet, fledgling weight, and survival to breeding age. They will use present-day data from puffin webcams and sound recordings to supplement their work with historical datasets. They will train an artificial intelligence (AI) system to differentiate puffin vocalizations from those of other birds and puffin images from other bird images.
Award ID(s):
2241777
PAR ID:
10628994
Author(s) / Creator(s):
Publisher / Repository:
National Science Teachers Association National Conference, New Orleans, LA, November 2024
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Computer vision is a data-hungry field. Researchers and practitioners who work on human-centric computer vision, like facial recognition, emphasize the necessity of vast amounts of data for more robust and accurate models. Humans are seen as a data resource which can be converted into datasets. The necessity of data has led to a proliferation of gathering data from easily available sources, including public data from the web. Yet the use of public data has significant ethical implications for the human subjects in datasets. We bridge academic conversations on the ethics of using publicly obtained data with concerns about privacy and agency associated with computer vision applications. Specifically, we examine how practices of dataset construction from public data (not only from websites, but also from public settings and public records) make it extremely difficult for human subjects to trace their images as they are collected, converted into datasets, distributed for use, and, in some cases, retracted. We discuss two interconnected barriers that current data practices present to an ethics of traceability for human subjects: awareness and control. We conclude with key intervention points for enabling traceability for data subjects. We also offer suggestions for an improved ethics of traceability to enable both awareness and control for individual subjects in dataset curation practices.
  2. While bees are critical to sustaining a large proportion of global food production, as well as pollinating both wild and cultivated plants, they are decreasing in both numbers and diversity. Our understanding of the factors driving these declines is limited, in part, because we lack sufficient data on the distribution of bee species to predict changes in their geographic range under climate change scenarios. Additionally lacking is adequate data on the behavioral and anatomical traits that may make bees either vulnerable or resilient to human-induced environmental changes, such as habitat loss and climate change. Fortunately, a wealth of associated attributes can be extracted from the specimens deposited in natural history collections for over 100 years. Extending Anthophila Research Through Image and Trait Digitization (Big-Bee) is a newly funded US National Science Foundation Advancing Digitization of Biodiversity Collections project. Over the course of three years, we will create over one million high-resolution 2D and 3D images of bee specimens (Fig. 1), representing over 5,000 worldwide bee species, including most of the major pollinating species. We will also develop tools to measure bee traits from images and generate comprehensive bee trait and image datasets to measure changes through time. The Big-Bee network of participating institutions includes 13 US institutions (Fig. 2) and partnerships with US government agencies. We will develop novel mechanisms for sharing image datasets and datasets of bee traits that will be available through an open, Symbiota-Light (Gilbert et al. 2020) data portal called the Bee Library. In addition, biotic interaction and species association data will be shared via Global Biotic Interactions (Poelen et al. 2014). The Big-Bee project will engage the public in research through community science via crowdsourcing trait measurements and data transcription from images using Notes from Nature (Hill et al. 2012). 
Training and professional development for natural history collection staff, researchers, and university students in data science will be provided through the creation and implementation of workshops focusing on bee traits and species identification. We are also planning a short, artistic college radio segment called "the Buzz" to get people excited about bees, biodiversity, and the wonders of our natural world. 
  3. Abstract High-throughput cell proliferation assays to quantify drug-response are becoming increasingly common and powerful with the emergence of improved automation and multi-time point analysis methods. However, pipelines for analysis of these datasets that provide reproducible, efficient, and interactive visualization and interpretation are sorely lacking. To address this need, we introduce Thunor, an open-source software platform to manage, analyze, and visualize large, dose-dependent cell proliferation datasets. Thunor supports both end-point and time-based proliferation assays as input. It provides a simple, user-friendly interface with interactive plots and publication-quality images of cell proliferation time courses, dose–response curves, and derived dose–response metrics, e.g. IC50, including across datasets or grouped by tags. Tags are categorical labels for cell lines and drugs, used for aggregation, visualization and statistical analysis, e.g. cell line mutation or drug class/target pathway. A graphical plate map tool is included to facilitate plate annotation with cell lines, drugs and concentrations upon data upload. Datasets can be shared with other users via point-and-click access control. We demonstrate the utility of Thunor to examine and gain insight from two large drug response datasets: a large, publicly available cell viability database and an in-house, high-throughput proliferation rate dataset. Thunor is available from www.thunor.net. 
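As a rough illustration of the kind of dose-response metric Thunor reports, the sketch below generates synthetic viability data from a four-parameter Hill (log-logistic) model and reads off an IC50 by log-linear interpolation. The model, parameter values, and dose series are all invented for illustration; this is not Thunor's internal implementation.

```python
import math

def hill(dose, e0=1.0, emax=0.0, ec50=1e-7, h=1.0):
    """Four-parameter log-logistic (Hill) viability model (illustrative)."""
    return emax + (e0 - emax) / (1.0 + (dose / ec50) ** h)

# Synthetic 10-point dose series (molar), half-log spaced from 1 nM.
doses = [10 ** (-9 + 0.5 * i) for i in range(10)]
viability = [hill(d, ec50=1e-7, h=1.2) for d in doses]

# IC50: the dose where viability crosses 0.5, recovered by log-linear
# interpolation between the two bracketing doses.
ic50 = None
for lo, hi, v_lo, v_hi in zip(doses, doses[1:], viability, viability[1:]):
    if v_lo >= 0.5 >= v_hi:
        t = (v_lo - 0.5) / (v_lo - v_hi)
        ic50 = 10 ** (math.log10(lo) + t * (math.log10(hi) - math.log10(lo)))
        break

print(f"interpolated IC50 ≈ {ic50:.2e} M")
```

In practice a tool like Thunor fits the curve to noisy measured viabilities rather than exact model values, but the derived metric has the same meaning.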
  4. While deep networks have achieved broad success in analyzing natural images, when applied to medical scans, they often fail in unexpected situations. This study investigates model sensitivity to domain shifts, such as data sampled from different hospitals or confounded by demographic variables like sex and race, focusing on chest X-rays and skin lesion images. The key finding is that existing visual backbones lack an appropriate prior for reliable generalization in these settings. Inspired by medical training, the authors propose incorporating explicit medical knowledge communicated in natural language into deep networks. They introduce Knowledge-enhanced Bottlenecks (KnoBo), a class of concept bottleneck models that integrate knowledge priors, enabling reasoning with clinically relevant factors found in medical textbooks or PubMed. KnoBo utilizes retrieval-augmented language models to design an appropriate concept space, paired with an automatic training procedure for recognizing these concepts. Evaluations across 20 datasets demonstrate that KnoBo outperforms fine-tuned models on confounded datasets by 32.4% on average. Additionally, PubMed is identified as a promising resource for enhancing model robustness to domain shifts, outperforming other resources in both information diversity and prediction performance. 
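The concept-bottleneck structure described above can be illustrated with a toy sketch: inputs are first mapped to scores over named clinical concepts, and the final prediction is an inspectable linear rule over those scores. The concept names, weights, and feature dictionary below are all invented for illustration and are not KnoBo's actual concept space or training procedure.

```python
# Toy concept-bottleneck sketch (illustrative only, not KnoBo itself).
CONCEPTS = ["cardiomegaly", "pleural_effusion", "clear_lung_fields"]

def concept_scores(features: dict) -> list:
    """Stand-in for a learned concept predictor: here it simply reads
    precomputed per-concept evidence out of a feature dictionary."""
    return [features.get(c, 0.0) for c in CONCEPTS]

def predict_abnormal(features: dict) -> bool:
    # Interpretable head: a linear rule over the concept scores, so a
    # clinician can inspect exactly which concepts drove the decision.
    weights = [0.9, 0.8, -1.0]  # invented, not learned weights
    score = sum(w * s for w, s in zip(weights, concept_scores(features)))
    return score > 0.0

print(predict_abnormal({"cardiomegaly": 0.7, "clear_lung_fields": 0.2}))
```

The point of the bottleneck is that the only path from input to prediction runs through the named concepts, which is what makes the model's reasoning auditable against clinical knowledge.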
  5. Abstract As technology advances, Human-Robot Interaction (HRI) is boosting overall system efficiency and productivity. However, allowing robots to operate in close proximity to humans will inevitably place higher demands on precise human motion tracking and prediction. Datasets that contain both humans and robots operating in a shared space are receiving growing attention, as they may facilitate a variety of robotics and human-systems research. Datasets that track HRI during daily activities with rich information beyond video images are rarely seen. In this paper, we introduce a novel dataset that focuses on social navigation between humans and robots in a future-oriented Wholesale and Retail Trade (WRT) environment (https://uf-retail-cobot-dataset.github.io/). Eight participants performed tasks that are commonly undertaken by consumers and retail workers. More than 260 minutes of data were collected, including robot and human trajectories, human full-body motion capture, eye gaze directions, and other contextual information. Comprehensive descriptions of each category of data stream, as well as potential use cases, are included. Furthermore, analysis with multiple data sources and future directions are discussed.