Search for: All records

Award ID contains: 2112606

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Introduction: Advancements in machine learning (ML) algorithms, which make predictions from data without being explicitly programmed, together with the increased computational speed of graphics processing units (GPUs) over the last decade, have led to remarkable progress in the capabilities of ML. In many fields, including agriculture, this progress has outpaced the availability of sufficiently diverse, high-quality datasets, which now serve as a limiting factor. While many agricultural use cases appear feasible with current compute resources and ML algorithms, the lack of reusable hardware and software components, referred to as cyberinfrastructure (CI), for collecting, transmitting, cleaning, and labeling datasets and for training models on them is a major hindrance to developing solutions for agricultural use cases. This study addresses these challenges by exploring the collection and processing of a multimodal dataset and the training of ML models on it, and by providing a vision for agriculture-focused CI to accelerate innovation in the field.
     Methods: Data were collected during the 2023 growing season from three agricultural research locations across Ohio. The dataset includes 1 terabyte (TB) of multimodal data, comprising Unmanned Aerial System (UAS) imagery (RGB and multispectral) as well as soil and weather sensor data. The two primary crops studied were corn and soybean, the state's most widely cultivated crops. The data collected and processed in this study were used to train ML models that predict crop growth stage, soil moisture, and final yield.
     Results: Processing this dataset produced four CI components that can be used to provide higher-accuracy predictions in the agricultural domain: (1) a UAS imagery pipeline that reduced processing time and improved image quality over standard methods, (2) a tabular data pipeline that aggregated data from multiple sources and temporal resolutions and aligned them to a common temporal resolution, (3) an approach to adapting a vision transformer (ViT) architecture to incorporate agricultural domain expertise, and (4) a data visualization prototype used to identify outliers and improve trust in the data.
     Discussion: Further work will aim to mature these CI components and implement them on high-performance computing (HPC) systems. Open questions remain as to how CI components like these can best be leveraged to serve the needs of the agricultural community and accelerate the development of ML applications in agriculture.
    Free, publicly-accessible full text available January 23, 2026
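The tabular data pipeline in component (2) of item 1 aligns sensor streams recorded at different temporal resolutions to a common one. Below is a minimal sketch of that idea using pandas; the column names, sampling rates, and hourly target resolution are illustrative assumptions, not details taken from the paper.

```python
import pandas as pd

# Illustrative sensor streams at mismatched resolutions (names are hypothetical).
soil = pd.DataFrame(
    {"soil_moisture": [0.31, 0.30, 0.29]},
    index=pd.date_range("2023-06-01", periods=3, freq="15min"),
)
weather = pd.DataFrame(
    {"air_temp_c": [21.5, 23.0]},
    index=pd.date_range("2023-06-01", periods=2, freq="h"),
)

# Resample both streams to a common hourly resolution: average the denser
# soil readings, forward-fill the sparser weather readings.
soil_hourly = soil.resample("1h").mean()
weather_hourly = weather.resample("1h").ffill()

# Join on the shared hourly index to produce one model-ready table.
aligned = soil_hourly.join(weather_hourly, how="inner")
print(aligned)
```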
  2. Abstract This analysis quantifies the network dynamics, geographic concentration, and disparities in perishable food supply networks for temperature-controlled food shipments in the United States. The United States forms the core of global food systems and produces more high-quality data for network analysis than most other countries. We use the 2017 US Census Commodity Flow Survey and other publicly available data to derive empirical results from the Food Flow Model for perishable meats and perishable prepared foods. We identify the top ten counties for perishable food distribution and find that the Los Angeles and Chicago regions support the greatest volumes of perishable food movements. States that largely exist outside national perishable food networks are Arizona, Michigan, Montana, North Dakota, Texas, and West Virginia. Our analysis of US data highlights the importance of certain counties, states, and regions in perishable food networks and suggests areas where interventions could improve systems’ functions by increasing access to markets for farmers and access to food for underserved communities, especially those in rural regions. 
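One way to reproduce the flavour of the ranking described in item 2, counties ordered by total perishable-food volume moved, is a weighted-degree computation over a shipment graph. The sketch below uses networkx; the county names and tonnages are invented placeholders, not Food Flow Model outputs.

```python
import networkx as nx

# Hypothetical county-to-county shipment volumes in tons; real inputs would
# come from Commodity Flow Survey-derived Food Flow Model estimates.
flows = [
    ("Los Angeles County, CA", "Cook County, IL", 120_000),
    ("Cook County, IL", "Wayne County, MI", 95_000),
    ("Los Angeles County, CA", "Maricopa County, AZ", 80_000),
    ("Dallas County, TX", "Cook County, IL", 60_000),
]

G = nx.DiGraph()
G.add_weighted_edges_from(flows)

# Rank counties by total shipped plus received volume (weighted degree),
# one simple proxy for their importance in the distribution network.
volume = {n: G.degree(n, weight="weight") for n in G.nodes}
for county, tons in sorted(volume.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{county}: {tons:,} tons")
```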
  3. Abstract Drones are increasingly popular for collecting behaviour data of group‐living animals, offering inexpensive and minimally disruptive observation methods. Imagery collected by drones can be rapidly analysed using computer vision techniques to extract information, including behaviour classification, habitat analysis and identification of individual animals. While computer vision techniques can rapidly analyse drone‐collected data, the success of these analyses often depends on careful mission planning that considers downstream computational requirements—a critical factor frequently overlooked in current studies.
     We present a comprehensive summary of research in the growing AI‐driven animal ecology (ADAE) field, which integrates data collection with automated computational analysis, focused on aerial imagery for collective animal behaviour studies. We systematically analyse current methodologies, technical challenges and emerging solutions in this field, from drone mission planning to behavioural inference. We illustrate computer vision pipelines that infer behaviour from drone imagery and present the computer vision tasks used for each step. We map specific computational tasks to their ecological applications, providing a framework for future research design.
     Our analysis reveals that AI‐driven animal ecology studies of collective animal behaviour using drone imagery focus on detection and classification computer vision tasks. While convolutional neural networks (CNNs) remain dominant for detection and classification tasks, newer architectures such as transformer‐based models and specialized video analysis networks (e.g. X3D, I3D, SlowFast) designed for temporal pattern recognition are gaining traction for pose estimation and behaviour inference. However, reported model accuracy varies widely by computer vision task, species, habitat and evaluation metric, complicating meaningful comparisons between studies.
     Based on current trends, we conclude that semi‐autonomous drone missions will be increasingly used to study collective animal behaviour. While manual drone operation remains prevalent, autonomous drone manoeuvres, powered by edge AI, can scale and standardise collective animal behaviour studies while reducing the risk of disturbance and improving data quality. We propose guidelines for AI‐driven animal ecology drone studies adaptable to various computer vision tasks, species and habitats. This approach aims to collect high‐quality behaviour data while minimising disruption to the ecosystem.
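The pipelines surveyed in item 3 typically start with per-frame animal detection before tracking and behaviour inference. A minimal sketch of that first stage with a pretrained torchvision detector follows; the model choice, input, and confidence threshold are illustrative assumptions, since published studies generally fine-tune species-specific models.

```python
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# Stage 1 of a typical pipeline: detect animals in each drone frame.
# (A fine-tuned, species-specific detector would replace this COCO model.)
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 720, 1280)  # stand-in for one decoded video frame
with torch.no_grad():
    detections = detector([frame])[0]

# Later stages (sketched): keep confident boxes and hand their crops to a
# tracker and a temporal behaviour classifier (e.g. an X3D-style network).
keep = detections["scores"] > 0.8
boxes = detections["boxes"][keep]
print(f"{len(boxes)} candidate animals in this frame")
```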
  4. Abstract Drones have become invaluable tools for studying animal behaviour in the wild, enabling researchers to collect aerial video data of group‐living animals. However, manually piloting drones to track animal groups consistently is challenging due to complex factors such as terrain, vegetation, group spread and movement patterns. The variability in manual piloting can result in unusable data for downstream behavioural analysis, making it difficult to collect standardized datasets for studying collective animal behaviour.
     To address these challenges, we present WildWing, a complete hardware and software open‐source unmanned aerial system (UAS) for autonomously collecting behavioural video data of group‐living animals. The system's main goal is to automate and standardize the collection of high‐quality aerial footage suitable for computer vision‐based behaviour analysis. We provide a novel navigation policy to autonomously track animal groups while maintaining optimal camera angles and distances for behavioural analysis, reducing the inconsistencies inherent in manual piloting.
     The complete WildWing system costs only $650 and incorporates drone hardware with custom software that integrates ecological knowledge into autonomous navigation decisions. The system produces 4K-resolution video at 30 fps while automatically maintaining appropriate distances and angles for behaviour analysis. We validate the system through field deployments tracking groups of Grevy's zebras, giraffes and Przewalski's horses at The Wilds conservation centre, demonstrating its ability to collect usable behavioural data consistently.
     By automating the data collection process, WildWing helps ensure consistent, high‐quality video data suitable for computer vision analysis of animal behaviour. This standardization is crucial for developing robust automated behaviour recognition systems to help researchers study and monitor wildlife populations at scale. The open‐source nature of WildWing makes autonomous behavioural data collection more accessible to researchers, enabling wider application of drone‐based behavioural monitoring in conservation and ecological research.
    Free, publicly-accessible full text available March 10, 2026
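WildWing's actual navigation policy ships with its open-source release; the sketch below only illustrates the general shape of such a policy, proportional commands that keep the herd centred in the frame and hold a standoff distance. All field names, gains, and targets here are hypothetical, not WildWing's published parameters.

```python
from dataclasses import dataclass

@dataclass
class HerdObservation:
    cx: float          # herd centroid x in the frame, normalized to [0, 1]
    cy: float          # herd centroid y in the frame, normalized to [0, 1]
    distance_m: float  # estimated drone-to-herd distance in metres

def follow_command(obs: HerdObservation, target_distance_m: float = 25.0,
                   gain: float = 0.8) -> dict:
    """Proportional commands that re-centre the herd and hold a standoff
    distance; a toy stand-in for a real navigation policy."""
    yaw_rate = gain * (obs.cx - 0.5)                       # turn toward the herd
    climb = -gain * (obs.cy - 0.5)                         # correct vertical framing
    forward = gain * (obs.distance_m - target_distance_m)  # close or open range
    return {"yaw_rate": yaw_rate, "climb": climb, "forward": forward}

print(follow_command(HerdObservation(cx=0.62, cy=0.45, distance_m=31.0)))
```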
  5. Abstract Artificial intelligence (AI) has the potential for vast societal and economic gain; yet applications are developed in a largely ad hoc manner, lacking coherent, standardized, modular, and reusable infrastructures. The NSF‐funded Intelligent CyberInfrastructure with Computational Learning in the Environment AI Institute (“ICICLE”) aims to fundamentally advance edge‐to‐center, AI‐as‐a‐Service, achieved through intelligent cyberinfrastructure (CI) that spans the edge‐cloud‐HPC computing continuum, plug‐and‐play next‐generation AI and intelligent CI services, and a commitment to design for broad accessibility and widespread benefit. This design is foundational to the institute's commitment to democratizing AI. The institute's CI activities are informed by three high‐impact domains: animal ecology, digital agriculture, and smart foodsheds. The institute's workforce development and broadening participation in computing efforts reinforce the institute's commitment to democratizing AI. ICICLE seeks to serve as the national nexus for AI and intelligent CI, and welcomes engagement across its wide set of programs.
  6. Abstract Task‐incremental learning (Task‐IL) aims to enable an intelligent agent to continuously accumulate knowledge from new learning tasks without catastrophically forgetting what it has learned in the past. It has drawn increasing attention in recent years, with many algorithms proposed to mitigate neural network forgetting, yet none of the existing strategies completely eliminates the problem. Moreover, explaining and fully understanding what knowledge is forgotten, and how, during the incremental learning process remains under‐explored. In this paper, we propose KnowledgeDrift, a visual analytics framework for interpreting network forgetting, with three objectives: (1) to identify when the network fails to memorize past knowledge, (2) to visualize what information has been forgotten, and (3) to diagnose how knowledge attained in the new model interferes with what was learned in the past. Our analytical framework first identifies the occurrence of forgetting by tracking task performance throughout the incremental learning process, and then provides in‐depth inspections of drifted information at various levels of data granularity. KnowledgeDrift allows analysts and model developers to enhance their understanding of network forgetting and to compare the performance of different incremental learning algorithms. Three case studies are conducted in the paper to provide further insights and guidance for users in effectively diagnosing catastrophic forgetting over time.
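KnowledgeDrift's first objective, identifying when forgetting occurs by tracking task performance across incremental steps, can be illustrated with the common forgetting measure: the drop from a task's best past accuracy to its current accuracy. The numbers below are invented for illustration and are not results from the paper.

```python
# Accuracy on each task, re-evaluated after every incremental training step
# (None = task not yet seen). Values are illustrative only.
history = {
    "task_1": [0.92, 0.85, 0.71],  # accuracy after learning tasks 1, 2, 3
    "task_2": [None, 0.90, 0.88],
    "task_3": [None, None, 0.93],
}

def forgetting(acc_over_time):
    """Drop from a task's best past accuracy to its latest accuracy: the
    scalar signal that catastrophic forgetting has occurred."""
    seen = [a for a in acc_over_time if a is not None]
    return max(seen[:-1]) - seen[-1] if len(seen) > 1 else 0.0

for task, accs in history.items():
    print(f"{task}: forgetting = {forgetting(accs):.2f}")
```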
  7. Free, publicly-accessible full text available September 23, 2026
  8. Free, publicly-accessible full text available September 9, 2026
  9. Free, publicly-accessible full text available September 2, 2026
  10. Free, publicly-accessible full text available July 18, 2026