Japanese rhinoceros beetle (Trypoxylus dichotomus) males have exaggerated horns used to compete for feeding territories. Larger males with larger horns generally win these competitions, giving them the opportunity to mate with females. However, agonistic interactions between males appear to begin with an initial assessment ritual, which often results in one beetle retreating without escalating to physical combat. It is unknown what information competing beetles may be able to communicate to each other during this assessment ritual. In many insect species, chemical signals can carry a range of information, including social position, nutritional state, morphology, and sex. In particular, cuticular hydrocarbons (CHCs), waxes secreted onto the surface of insect exoskeletons, mediate diverse forms of chemical communication in insects. Here, we asked whether CHCs in rhinoceros beetles carry information about body size and sex that males could use during assessment behavior. The CHCs of male and female Japanese rhinoceros beetles were extracted by washing the elytra of deceased beetles in hexanes. Samples were then analyzed by gas chromatography–mass spectrometry (GC-MS). Multivariate analysis of the hydrocarbon composition observed in the GC-MS spectra revealed patterns associated with sex and with multiple body size components in males (horn length, pronotum width, elytra length). We suggest that male rhinoceros beetles could communicate body size information through CHCs, which may explain the decision to escalate to combat or to retreat after the initial assessment. We also suggest that male rhinoceros beetles could identify a conspecific's sex from its CHCs.
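The multivariate analysis described above can be illustrated with a brief, hypothetical sketch: integrated GC-MS peak areas are converted to relative abundances and ordinated, and the leading axes can then be related to sex and body size measurements. The synthetic data and names below are assumptions for illustration, not part of the study's actual pipeline.

```python
# Hypothetical sketch of a multivariate analysis of CHC composition from GC-MS peak areas.
# Synthetic data stand in for a real peak table (rows = beetles, columns = hydrocarbon peaks).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
areas = pd.DataFrame(rng.random((40, 15)),
                     columns=[f"peak_{i}" for i in range(15)])  # raw integrated peak areas

# Normalize to relative abundances so composition, not total quantity, drives the analysis
rel_abundance = areas.div(areas.sum(axis=1), axis=0)

# Ordinate the compositions; the leading axes can then be tested against sex and body size measures
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(rel_abundance))
print("Variance explained by first two components:", pca.explained_variance_ratio_)
```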
2018-NEON-beetles
This dataset is composed of a collection of 577 images of ethanol-preserved beetles collected at NEON sites in 2018. Each image contains a collection of beetles of the same species from a single plot at the labeled site. In 2022, the beetles were arranged on a lattice and photographed; the elytra length and width were then annotated for each individual in each image using Zooniverse. Individual beetle images were segmented out by scaling the elytra measurement pixel coordinates to the full-size images (more information on this process is available in the Imageomics/2018-NEON-beetles-processing repository).
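As a rough illustration of the segmentation step (the authoritative code is in the Imageomics/2018-NEON-beetles-processing repository), the sketch below scales bounding-box coordinates annotated on a resized copy of a group image back to the full-size image and crops out one individual. All names and values are hypothetical.

```python
# Illustrative sketch only; the actual processing code is in the
# Imageomics/2018-NEON-beetles-processing repository.
from PIL import Image

def crop_individual(group_image_path, x, y, w, h, annotated_width):
    """Crop one beetle from a group image.

    (x, y, w, h) is a bounding box in pixel coordinates of the annotated
    (downscaled) copy of the image; annotated_width is that copy's width.
    """
    img = Image.open(group_image_path)
    scale = img.width / annotated_width  # factor to rescale coordinates to the full-size image
    box = (int(x * scale), int(y * scale), int((x + w) * scale), int((y + h) * scale))
    return img.crop(box)

# Hypothetical usage:
# beetle = crop_individual("plot_image.jpg", x=120, y=340, w=60, h=150, annotated_width=1200)
# beetle.save("beetle_001.jpg")
```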
- Award ID(s):
- 2301322
- PAR ID:
- 10639887
- Publisher / Repository:
- Hugging Face
- Date Published:
- Edition / Version:
- 7b3731d
- Subject(s) / Keyword(s):
- Carabid, National Ecological Observatory Network, Trait
- Format(s):
- Medium: X Size: 5.4GB Other: .jpg; .csv
- Size(s):
- 5.4GB
- Institution:
- University of Maine
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract Insect populations are changing rapidly, and monitoring these changes is essential for understanding the causes and consequences of such shifts. However, large‐scale insect identification projects are time‐consuming and expensive when done solely by human identifiers. Machine learning offers a possible solution to help collect insect data quickly and efficiently. Here, we outline a methodology for training classification models to identify pitfall trap‐collected insects from image data and then apply the method to identify ground beetles (Carabidae). All beetles were collected by the National Ecological Observatory Network (NEON), a continental-scale ecological monitoring project with sites across the United States. We describe the procedures for image collection, image data extraction, data preparation, and model training, and compare the performance of five machine learning algorithms and two classification methods (hierarchical vs. single‐level) identifying ground beetles from the species to subfamily level. All models were trained using pre‐extracted feature vectors, not raw image data. Our methodology allows data to be extracted from multiple individuals within the same image, thus enhancing time efficiency; it uses relatively simple models that allow direct assessment of model performance; and it can be performed on relatively small datasets. The best-performing algorithm, linear discriminant analysis (LDA), reached an accuracy of 84.6% at the species level when naively identifying species, which was further increased to >95% when classifications were limited by known local species pools. Model performance was negatively correlated with taxonomic specificity, with the LDA model reaching an accuracy of ~99% at the subfamily level. When classifying carabid species not included in the training dataset at higher taxonomic levels, the models performed significantly better than random classification. We also observed greater performance when classifications were made using the hierarchical classification method compared to the single‐level classification method at higher taxonomic levels. The general methodology outlined here serves as a proof‐of‐concept for classifying pitfall trap‐collected organisms using machine learning algorithms, and the image data extraction methodology may also be used for applications beyond machine learning. We propose that integration of machine learning into large‐scale identification pipelines will increase efficiency and lead to a greater flow of insect macroecological data, with the potential to be expanded for use with other noninsect taxa.
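A minimal sketch of the workflow that abstract describes, assuming feature vectors have already been extracted per individual: an LDA classifier is fit on those vectors, and predictions can optionally be restricted to a known local species pool. The synthetic data and species labels are illustrative, not taken from the study.

```python
# Hedged sketch: LDA on pre-extracted feature vectors, with an optional restriction of
# predictions to a known local species pool. Synthetic data and labels are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
species = np.array(["species_a", "species_b", "species_c"])
X = rng.random((90, 20))          # stand-in for pre-extracted feature vectors
y = np.repeat(species, 30)        # stand-in species labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

def predict_with_local_pool(model, features, local_pool):
    """Return the most probable class among species known to occur at the site."""
    proba = model.predict_proba(features)
    allowed = np.isin(model.classes_, list(local_pool))
    proba = np.where(allowed, proba, 0.0)        # zero out species outside the local pool
    return model.classes_[proba.argmax(axis=1)]

preds = predict_with_local_pool(lda, X_test, local_pool={"species_a", "species_b"})
print("Naive accuracy:", lda.score(X_test, y_test))
```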
-
The South American palm weevil, Rhynchophorus palmarum (Coleoptera: Curculionidae), established in San Diego County, California, USA, sometime around 2014. Attached to the motile adults of this destructive palm pest, we identified three species of uropodine mites (Parasitiformes: Uropodina): Centrouropoda n. sp., Dinychus n. sp. and Fuscuropoda marginata. Two of these species, Centrouropoda n. sp. and Dinychus n. sp., are recorded for the first time in the USA and were likely introduced by R. palmarum. Several species of mites, primarily of Uropodina, have previously been recorded as phoretic on Rhynchophorus spp. In this study, we examined 3,035 adult R. palmarum trapped over a 2.5-year period, July 2016 to December 2018, and documented the presence and species composition of phoretic mites and their relationship with weevil morphometrics (i.e., pronotum length and width). The presence and species composition of mites on weevil body parts changed over the survey period. No mites were found under weevil elytra in 2016, and mite prevalence under elytra increased over 2017–2018 due to an increased abundance of Centrouropoda n. sp. per individual beetle. Mite occurrence levels were significantly correlated with reduced pronotum widths of male weevils only. The significance of this finding for male weevil fitness is unknown. Potential implications of phoretic mites for aspects of the invasion biology of R. palmarum are discussed.
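For the morphometric relationship reported above (mite occurrence vs. pronotum width in male weevils), a rank correlation of the following form would be one way to examine it; the synthetic values are purely illustrative, and the study's actual statistical approach may differ.

```python
# Illustrative only: a rank correlation between mite load and pronotum width in male weevils.
# Synthetic values stand in for the study's morphometric data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
pronotum_width_mm = rng.normal(loc=16.0, scale=1.5, size=200)            # hypothetical male weevils
mite_count = rng.poisson(lam=np.clip(30 - pronotum_width_mm, 1, None))   # toy negative relationship

rho, p_value = spearmanr(mite_count, pronotum_width_mm)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```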
-
Kajtoch, Łukasz (Ed.) This study presents an initial model for bark beetle identification, serving as a foundational step toward developing a fully functional and practical identification tool. Bark beetles are known for extensive damage to forests globally, as well as for uniform and homoplastic morphology that poses identification challenges. We use a MaxViT-based deep learning backbone, which applies local and global attention, to classify bark beetles down to the genus level from images containing multiple beetles. The methodology involves a process of image collection, preparation, and model training, leveraging pre-classified beetle species to ensure accuracy and reliability. The model's F1 score estimates of 0.99 and 1.0 indicate a strong ability to accurately classify genera in the collected data, including those previously unknown to the model. This makes it a valuable first step toward building a tool for applications in forest management and ecological research. While the current model distinguishes among 12 genera, further refinement and additional data will be necessary to achieve reliable species-level identification, which is particularly important for detecting new invasive species. Despite the controlled conditions of image collection and potential challenges in real-world application, this study provides the first model capable of identifying bark beetle genera, and by far the largest training set of images for any comparable insect group. We also designed a function that reports whether a species appears to be unknown. Further research is suggested to enhance the model's generalization capabilities and scalability, emphasizing the integration of advanced machine learning techniques for improved species classification and the detection of invasive or undescribed species.
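The abstract above does not include code, but the combination it describes, a MaxViT backbone plus a routine that flags apparently unknown taxa, can be sketched with a softmax-confidence threshold. The snippet below uses torchvision's MaxViT-T as a stand-in architecture and a hypothetical threshold; it is not the authors' model.

```python
# Hypothetical sketch: genus classification with an "unknown" flag via a softmax
# confidence threshold, using torchvision's MaxViT-T as a stand-in architecture.
import torch
import torch.nn.functional as F
from torchvision.models import maxvit_t

NUM_GENERA = 12                                          # matches the number of genera reported above
model = maxvit_t(weights=None, num_classes=NUM_GENERA)   # untrained stand-in, not the authors' model
model.eval()

def classify_with_unknown(images, threshold=0.8):
    """Return predicted genus indices; -1 marks predictions that appear to be unknown."""
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)
    confidence, prediction = probs.max(dim=1)
    prediction[confidence < threshold] = -1              # low confidence -> flag as unknown
    return prediction

# Hypothetical usage with a batch of four 224x224 RGB images:
# preds = classify_with_unknown(torch.rand(4, 3, 224, 224))
```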
-
Hyperspectral cameras collect detailed spectral information at each image pixel, contributing to the identification of image features. The rich spectral content of hyperspectral imagery has led to its application in diverse fields of study. This study focused on cloud classification using a dataset of hyperspectral sky images captured by a Resonon PIKA XC2 camera. The camera records images using 462 spectral bands, ranging from 400 to 1000 nm, with a spectral resolution of 1.9 nm. Our preliminary, unlabeled dataset comprised 33 parent hyperspectral images (HSI), each measuring 4402-by-1600 pixels. Drawing on the meteorological expertise within our team, we manually labeled pixels by extracting 10 to 20 sample patches from each parent image, each patch consisting of a 50-by-50 pixel field. This process yielded a collection of 444 patches, each categorically labeled into one of seven cloud and sky condition categories. To embed the inherent data structure while classifying individual pixels, we introduced an innovative technique to boost classification accuracy by incorporating patch-specific information into each pixel's feature vector. The posterior probabilities generated by patch-level classifiers, which capture the unique attributes of each patch, were then concatenated with the pixel's original spectral data to form an augmented feature vector. We then applied a final classifier to map the augmented vectors to the seven cloud/sky categories. The results compared favorably to a baseline model without patch-origin embedding, showing that incorporating spatial context along with the spectral information inherent in hyperspectral images enhances classification accuracy in hyperspectral cloud classification. The dataset is available on IEEE DataPort.
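The patch-origin embedding described above can be sketched as follows, under assumed array shapes and stand-in classifiers: each patch's posterior probability vector is broadcast to its pixels and concatenated with the pixel spectra before a final per-pixel classifier is trained. Toy sizes and random data are used purely for illustration.

```python
# Sketch of patch-origin embedding with toy shapes and random data; the real dataset has
# 444 patches of 50-by-50 pixels and 462 spectral bands. Classifiers are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_patches, pixels_per_patch, n_bands, n_classes = 12, 25, 462, 7

# Assume these were produced upstream: per-pixel spectra, per-pixel labels, and a
# patch-level classifier's posterior probabilities (one probability vector per patch).
pixel_spectra = np.random.rand(n_patches, pixels_per_patch, n_bands)
pixel_labels = np.random.randint(0, n_classes, size=(n_patches, pixels_per_patch))
patch_posteriors = np.random.dirichlet(np.ones(n_classes), size=n_patches)

# Broadcast each patch's posterior to all of its pixels, then concatenate onto the spectra
patch_features = np.repeat(patch_posteriors[:, None, :], pixels_per_patch, axis=1)
augmented = np.concatenate([pixel_spectra, patch_features], axis=2)
augmented = augmented.reshape(-1, n_bands + n_classes)

# Final per-pixel classifier maps augmented vectors to the seven cloud/sky categories
final_clf = LogisticRegression(max_iter=500).fit(augmented, pixel_labels.reshape(-1))
print("Training accuracy on toy data:", final_clf.score(augmented, pixel_labels.reshape(-1)))
```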