-
Casotti, Raffaella (Ed.)
Mesoscale oceanographic features, including eddies, have the potential to alter productivity and other biogeochemical rates in the ocean. Here, we examine the microbiome of a cyclonic, Gulf Stream frontal eddy, with a distinct origin and environmental parameters compared to surrounding waters, in order to better understand the processes dominating microbial community assembly in the dynamic coastal ocean. Our microbiome-based approach identified the eddy as distinct from the surrounding Gulf Stream waters. The eddy-associated microbial community occupied a larger area than identified by temperature and salinity alone, increasing the predicted extent of eddy-associated biogeochemical processes. While the eddy formed on the continental shelf, after two weeks both environmental parameters and microbiome composition of the eddy were most similar to the Gulf Stream, suggesting the effect of environmental filtering on community assembly or physical mixing with adjacent Gulf Stream waters. In spite of the potential for eddy-driven upwelling to introduce nutrients and stimulate primary production, eddy surface waters exhibited lower chlorophyll a, along with a distinct and less even microbial community, compared to the Gulf Stream.
At the population level, the eddy microbiome exhibited differences among the cyanobacteria (e.g. lower Trichodesmium and higher Prochlorococcus) and in the heterotrophic Alphaproteobacteria (e.g. lower relative abundances of specific SAR11 phylotypes) versus the Gulf Stream. However, better delineation of the relative roles of processes driving eddy community assembly will likely require following the eddy and surrounding waters since inception. Additionally, sampling throughout the water column could better clarify the contribution of these mesoscale features to primary production and carbon export in the oceans.
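The "less even microbial community" noted above is a quantitative statement about relative abundances; evenness is commonly scored with Shannon diversity and Pielou's J. A minimal sketch of that calculation, using invented abundance values rather than the study's data:

```python
import numpy as np

def shannon_and_pielou(counts):
    """Shannon diversity H and Pielou's evenness J = H / ln(S) for one community sample."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0]
    p = p / p.sum()               # relative abundances
    H = -np.sum(p * np.log(p))    # Shannon diversity (natural log)
    S = p.size                    # observed richness
    return H, H / np.log(S)       # J ranges 0-1; lower J means a less even community

# Hypothetical example only: a sample dominated by a few taxa (eddy-like)
# versus a more evenly distributed sample (Gulf Stream-like).
print(shannon_and_pielou([500, 300, 50, 20, 10, 5, 5]))
print(shannon_and_pielou([150, 140, 130, 120, 120, 120, 110]))
```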
-
Abstract High-resolution optical imaging systems are quickly becoming universal tools to characterize and quantify microbial diversity in marine ecosystems. Automated classification systems such as convolutional neural networks (CNNs) are often developed to identify species within the immense number of images (e.g., millions per month) collected. The goal of our study was to develop a CNN to classify phytoplankton images collected with an Imaging FlowCytobot for the Palmer Antarctica Long-Term Ecological Research project. A relatively small CNN (~2 million parameters) was developed and trained using a subset of manually identified images, resulting in an overall test accuracy, recall, and f1-score of 93.8, 93.7, and 93.7%, respectively, on a balanced dataset. However, the f1-score dropped to 46.5% when tested on a dataset of 10,269 new images drawn from the natural environment without balancing classes. This decrease is likely due to highly imbalanced class distributions dominated by smaller, less differentiable cells, high intraclass variance, and interclass morphological similarities of cells in naturally occurring phytoplankton assemblages. As a case study to illustrate the value of the model, it was used to predict taxonomic classifications (ranging from genus to class) of phytoplankton at Palmer Station, Antarctica, from late austral spring to early autumn in 2017-2018 and 2018-2019. The CNN was generally able to identify important seasonal dynamics such as the shift from large centric diatoms to small pennate diatoms in both years, which is thought to be driven by increases in glacial meltwater from January to March. This shift in particle size distribution has significant implications for the ecology and biogeochemistry of these waters. Moving forward, we hope to further increase the accuracy of our model to better characterize coastal phytoplankton communities threatened by rapidly changing environmental conditions.
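The study above reports a relatively small (~2 million parameter) CNN for Imaging FlowCytobot images, but the architecture itself is not reproduced in the abstract. The Keras-style sketch below is only an illustrative stand-in; the image size, class count, and layer widths are assumptions rather than the published model:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 20          # assumed; the paper's class list is not reproduced here
IMG_SIZE = (128, 128, 1)  # assumed greyscale IFCB image size

def build_small_cnn():
    """A small convolutional classifier for plankton images (illustrative only)."""
    model = models.Sequential([
        layers.Input(shape=IMG_SIZE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_small_cnn()
model.summary()  # exact parameter count depends on the assumed sizes above
```

With these placeholder sizes the model stays in the low millions of parameters; the balanced training set described in the abstract would be passed to model.fit, and the imbalance problem only appears when the trained model is applied to natural assemblages.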
-
High-resolution optical imaging systems are quickly becoming universal tools to characterize and quantify microbial diversity in marine ecosystems. Automated detection systems such as convolutional neural networks (CNN) are often developed to identify the immense number of images collected. The goal of our study was to develop a CNN to classify phytoplankton images collected with an Imaging FlowCytobot for the Palmer Antarctica Long-Term Ecological Research project. A medium-complexity CNN was developed using a subset of manually identified images, resulting in an overall accuracy, recall, and f1-score of 93.8%, 93.7%, and 93.7%, respectively. The f1-score dropped to 46.5% when tested on a new random subset of 10,269 images, likely due to highly imbalanced class distributions, high intraclass variance, and interclass morphological similarities of cells in naturally occurring phytoplankton assemblages. Our model was then used to predict taxonomic classifications of phytoplankton at Palmer Station, Antarctica over the 2017-2018 and 2018-2019 summer field seasons. The CNN was generally able to capture important seasonal dynamics such as the shift from large centric diatoms to small pennate diatoms in both seasons, which is thought to be driven by increases in glacial meltwater from January to March. Moving forward, we hope to further increase the accuracy of our model to better characterize coastal phytoplankton communities threatened by rapidly changing environmental conditions.
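Both versions of this abstract trace the drop from ~94% to 46.5% f1 to class imbalance in natural samples. The sketch below uses synthetic labels and scikit-learn only to show how macro-averaged f1 and per-class recall expose that failure mode while overall accuracy still looks strong; the class labels and error rates are invented for illustration:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, classification_report

rng = np.random.default_rng(0)

# Synthetic "natural assemblage": one small, hard-to-distinguish class dominates.
# 0 = small undifferentiated cells, 1 = centric diatoms, 2 = pennate diatoms.
y_true = np.concatenate([np.zeros(900), np.ones(60), np.full(40, 2)]).astype(int)

# A classifier that does well on the dominant class but confuses the rare ones.
y_pred = y_true.copy()
rare = np.where(y_true > 0)[0]
flip = rng.choice(rare, size=int(0.6 * rare.size), replace=False)
y_pred[flip] = 0  # rare classes often mistaken for the dominant class

print("accuracy:", accuracy_score(y_true, y_pred))             # still looks high
print("macro f1:", f1_score(y_true, y_pred, average="macro"))  # drops sharply
print(classification_report(y_true, y_pred, digits=3))
```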
-
Abstract The flourishing application of drones within marine science provides more opportunity to conduct photogrammetric studies on large and varied populations of many different species. While these new platforms are increasing the size and availability of imagery datasets, established photogrammetry methods require considerable manual input, which allows individual bias in technique to influence measurements, increases error, and magnifies the time required to apply these techniques.
Here, we introduce the next generation of photogrammetry methods utilizing a convolutional neural network to demonstrate the potential of a deep learning‐based photogrammetry system for automatic species identification and measurement. We then present the same data analysed using conventional techniques to validate our automatic methods.
Our results compare favorably across both techniques, correctly predicting whale species with 98% accuracy (57/58) for humpback whales, minke whales, and blue whales. Ninety percent of automated length measurements were within 5% of manual measurements, providing sufficient resolution to inform morphometric studies and establish size classes of whales automatically.
The results of this study indicate that deep learning techniques applied to survey programs that collect large archives of imagery may help researchers and managers move quickly past analytical bottlenecks and provide more time for abundance estimation, distributional research, and ecological assessments.
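The automated length measurements reported above ultimately rest on standard photogrammetric scaling: a length in pixels is converted to metres via the ground sample distance implied by altitude, focal length, sensor width, and image width. A minimal sketch of that conversion, with placeholder camera values rather than the study's calibration:

```python
def ground_sample_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Metres of ground covered by one pixel (nadir imagery over a flat scene assumed)."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

def length_from_pixels(pixel_length, altitude_m, focal_length_mm=35.0,
                       sensor_width_mm=23.5, image_width_px=6000):
    """Convert a measured pixel length (e.g., rostrum to fluke notch) to metres."""
    gsd = ground_sample_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px)
    return pixel_length * gsd

# Hypothetical example: a whale spanning 2,400 pixels photographed from 40 m altitude.
print(round(length_from_pixels(2400, altitude_m=40.0), 2), "m")
```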
-
Abstract Marine megafauna are difficult to observe and count because many species travel widely and spend large amounts of time submerged. As such, management programmes seeking to conserve these species are often hampered by limited information about population levels.
Unoccupied aircraft systems (UAS, aka drones) provide a potentially useful technique for assessing marine animal populations, but a central challenge lies in analysing the vast amounts of data generated in the images or video acquired during each flight. Neural networks are emerging as a powerful tool for automating object detection across data domains and can be applied to UAS imagery to generate new population‐level insights. To explore the utility of these emerging technologies in a challenging field setting, we used neural networks to enumerate olive ridley turtles Lepidochelys olivacea in drone images acquired during a mass‐nesting event on the coast of Ostional, Costa Rica.
Results revealed substantial promise for this approach; specifically, our model detected 8% more turtles than manual counts while effectively reducing the manual validation burden from 2,971,554 to 44,822 image windows. Our detection pipeline was trained on a relatively small set of turtle examples (N = 944), implying that this method can be easily bootstrapped for other applications, and is practical with real‐world UAS datasets.
Our findings highlight the feasibility of combining UAS and neural networks to estimate population levels of diverse marine animals and suggest that the automation inherent in these techniques will soon permit monitoring over spatial and temporal scales that would previously have been impractical.
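The reduction from 2,971,554 to 44,822 image windows described above reflects a tile-and-filter detection pipeline: large drone images are split into windows, each window is scored by the network, and only windows above a confidence threshold go to a human for validation. A minimal sketch of that flow, with a placeholder scoring function standing in for the trained detector:

```python
import numpy as np

def iter_windows(image, size=256, stride=256):
    """Yield (row, col, tile) windows covering a large aerial image."""
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield r, c, image[r:r + size, c:c + size]

def detector_score(tile):
    """Placeholder for the trained network's turtle-probability output."""
    return float(tile.mean()) / 255.0  # stand-in score; a real model goes here

def windows_to_review(image, threshold=0.5):
    """Keep only windows the detector flags, shrinking the manual validation burden."""
    flagged = []
    for r, c, tile in iter_windows(image):
        if detector_score(tile) >= threshold:
            flagged.append((r, c))
    return flagged

# Hypothetical example on a random array; a real pipeline would load drone frames.
image = np.random.randint(0, 256, size=(4000, 6000), dtype=np.uint8)
print(len(windows_to_review(image)), "windows flagged for manual validation")
```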