Title: Planning for Fish Net Inspection with an Autonomous OSV
In aquaculture farming, escaping fish can cause large economic losses and significant local environmental damage, so the careful inspection of fish nets for breaks or holes is an important problem. In this paper, we extend our previous work on the design of an omnidirectional surface vehicle (OSV) for fish-net inspection by incorporating artificial intelligence (AI) planning methods. For large aquaculture sites, closely inspecting the entire surface of the net can be inefficient, because holes may occur infrequently. We leverage a hierarchical task network (HTN) planner to construct plans that decide when to evaluate a net closely and when to evaluate it from a distance, surveying the net over a wider range. Simulation results are provided.
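The paper's planner code is not reproduced here; the following is a minimal, hypothetical sketch of the close-versus-distant decision an HTN-style decomposition might produce for such an inspection task. The segment names, the suspicion score, and the 0.5 threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical HTN-style decomposition for net inspection.
# Segment IDs, suspicion scores, and the threshold are illustrative
# assumptions, not values from the paper.

def plan_inspection(segments, threshold=0.5):
    """Decompose the abstract task 'inspect net' into primitive actions.

    segments: list of (segment_id, suspicion) pairs, where suspicion is a
    prior estimate (0..1) that the segment contains a hole.
    Returns an ordered list of primitive actions for the OSV.
    """
    plan = []
    for seg_id, suspicion in segments:
        plan.append(("navigate_to", seg_id))
        if suspicion >= threshold:
            # High-risk segment: slow, close-range inspection pass.
            plan.append(("inspect_close", seg_id))
        else:
            # Low-risk segment: fast, wide-range scan from a distance.
            plan.append(("scan_far", seg_id))
    return plan

if __name__ == "__main__":
    for action in plan_inspection([("A", 0.8), ("B", 0.1), ("C", 0.6)]):
        print(action)
```

In a full HTN planner the two inspection modes would be alternative decomposition methods of a single abstract task, with preconditions selecting between them; the sketch collapses that choice into one threshold test.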
Award ID(s):
1849228 1828678 1934836
NSF-PAR ID:
10212090
Journal Name:
2020 International Conference on System Science and Engineering (ICSSE)
Page Range / eLocation ID:
1 to 5
Sponsoring Org:
National Science Foundation
More Like this
  1. To keep global surface warming below 1.5°C by 2100, the portfolio of cost-effective CDR technologies must expand. To evaluate the potential of macroalgae CDR, we developed a kelp aquaculture bio-techno-economic model in which large quantities of kelp would be farmed at an offshore site, transported to a deep-water "sink site", and then deposited below the sequestration horizon (1,000 m). We estimated the costs and associated emissions of nursery production, permitting, farm construction, ocean cultivation, biomass transport, and Monitoring, Reporting, and Verification (MRV) for a 1,000-acre (405 ha) "baseline" project located in the Gulf of Maine, USA. The baseline kelp CDR model applies current systems of kelp cultivation to deep-water (100 m) exposed sites using best available modeling methods. We calculated the levelized unit cost of CO2eq sequestration (LCOC; $ per tCO2eq). Under baseline assumptions, LCOC was $17,048 per tCO2eq. Despite annually sequestering 628 tCO2eq within kelp biomass at the sink site, the project was only able to net 244 C credits (tCO2eq) each year, a true sequestration "additionality" rate (AR) of 39% (i.e., the ratio of net C credits produced to gross C sequestered within kelp biomass). After optimizing 18 key parameters for which we identified a range within the literature, LCOC fell to $1,257 per tCO2eq and AR increased to 91%, demonstrating that substantial cost reductions could be achieved through process improvement and decarbonization of production supply chains. Kelp CDR may be limited by high production costs and energy-intensive operations, as well as MRV uncertainty.
To resolve these challenges, R&D must (1) de-risk farm designs that maximize lease space, (2) automate the seeding and harvest processes, (3) leverage selective breeding to increase yields, (4) assess the cost-benefit of gametophyte nursery culture as both a platform for selective breeding and driver of operating cost reductions, (5) decarbonize equipment supply chains, energy usage, and ocean cultivation by sourcing electricity from renewables and employing low GHG impact materials with long lifespans, and (6) develop low-cost and accurate MRV techniques for ocean-based CDR. 
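The additionality rate quoted above follows directly from the ratio of net credits to gross sequestration; a quick check using the baseline figures from the abstract:

```python
# Check of the additionality rate (AR) using the baseline figures
# quoted in the abstract (244 net credits, 628 tCO2eq sequestered).

def additionality_rate(net_credits, gross_sequestered):
    """AR = net C credits produced / gross C sequestered in kelp biomass."""
    return net_credits / gross_sequestered

baseline_ar = additionality_rate(244, 628)
print(f"Baseline AR: {baseline_ar:.0%}")  # prints "Baseline AR: 39%"
```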
  2. Abstract

    The ratio of dissolved oxygen to argon in seawater is frequently employed to estimate rates of net community production (NCP) in the oceanic mixed layer. The in situ O2/Ar-based method accounts for many physical factors that influence oxygen concentrations, permitting isolation of the biological oxygen signal produced by the balance of photosynthesis and respiration. However, this technique traditionally relies upon several assumptions when calculating the mixed-layer O2/Ar budget, most notably the absence of vertical fluxes of O2/Ar and the principle that the air-sea gas exchange of biological oxygen closely approximates net productivity rates. Employing a Lagrangian study design and leveraging data outputs from a regional physical oceanographic model, we conducted in situ measurements of O2/Ar in the California Current Ecosystem in spring 2016 and summer 2017 to evaluate these assumptions within a "worst-case" field environment. Quantifying vertical fluxes, incorporating non-steady-state changes in O2/Ar, and comparing NCP estimates evaluated over several-day versus longer timescales, we find considerable differences in NCP metrics calculated over different time intervals and observe potentially significant effects from vertical fluxes, particularly advection. Additionally, we observe strong diel variability in O2/Ar and NCP rates at multiple stations. Our results re-emphasize the importance of accounting for vertical fluxes when interpreting O2/Ar-derived NCP data and the potentially large effect of non-steady-state conditions on NCP evaluated over shorter timescales. In addition, diel cycles in surface O2/Ar can also bias interpretation of NCP data, depending on local productivity and the time of day when measurements were made.
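For context, the steady-state mixed-layer budget underlying the O2/Ar method equates NCP with the air-sea flux of biological oxygen. A minimal sketch of that standard formulation follows; the input values are illustrative open-ocean magnitudes, not data from this study.

```python
# Standard steady-state O2/Ar formulation for NCP. The numbers used
# below are illustrative magnitudes, not measurements from this study.

def biological_o2_supersaturation(o2_ar_measured, o2_ar_saturation):
    """Delta(O2/Ar): fractional biological O2 supersaturation."""
    return o2_ar_measured / o2_ar_saturation - 1.0

def ncp_steady_state(k_o2, o2_sat, delta_o2_ar):
    """Steady-state NCP (mmol O2 m^-2 d^-1) as the air-sea flux of
    biological oxygen: NCP ~ k_O2 * [O2]sat * Delta(O2/Ar).

    k_o2:   gas transfer velocity for O2 (m d^-1)
    o2_sat: O2 saturation concentration (mmol m^-3)
    """
    return k_o2 * o2_sat * delta_o2_ar

# A 5% biological supersaturation with typical k and [O2]sat values:
delta = biological_o2_supersaturation(1.05 * 22.0, 22.0)
print(round(ncp_steady_state(3.0, 250.0, delta), 1))  # prints 37.5
```

The abstract's point is precisely that this steady-state flux term can be a poor proxy for NCP when vertical fluxes, non-steady-state changes, or diel cycles are large.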

  3. Abstract
    Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed on large databases. Variations in the name spelling of individual scholars further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and first initial. We illustrate our approach using three case studies.
    Design/methodology/approach: The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net, i.e., a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would assume the form: 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives.
Author identifiers provide a good starting point, e.g., if 'Doe, J' and 'Doe, John' share the same author identifier, this is sufficient for us to conclude these are one and the same individual. We find email addresses similarly adequate, e.g., if two author names which share the same surname and same first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however. When this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; a single affiliation may employ two or more faculty members sharing the same last name and first initial. Similarly, it is conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification. To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with bibliometric software called VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we find most efficient in this study.
The script we developed to implement this procedure is designed to carry out our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that the manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script proceeds to identify and combine all author names sharing the primary author's surname and first initial who share commonalities in the WOS field on which the user was prompted to consolidate author names. This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to).
    Research limitations: Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example).
Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.
    Practical implications: The procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.
    Originality/value: Once again, the procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting with more recent approaches, harnessing the benefits of both.
    Findings: Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each case study; we find that match field effectiveness is in large part a function of field coverage. The three case studies also differ in original dataset size, the timeframe analyzed, and the subject areas in which the authors publish. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in more specific match fields, as well as to a more modest and manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable here. The procedure advanced herein is practical, replicable, and relatively user-friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured, or easy to work with.
The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g. emails, coauthors, affiliations, etc.), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user’s part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage. 
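The iterative consolidation described above (absorb name variants that share a field value with records already attributed to the author, then repeat) can be summarized in a short sketch. The record fields and matching logic below are a simplified illustration of the idea, not the authors' VantagePoint script.

```python
# Simplified illustration of iterative field-commonality name
# consolidation. Field names and match order are assumptions; this is
# not the authors' VantagePoint script.

def field_values(record, match_fields):
    """Yield (field, value) pairs for a record; values may be scalars
    or lists (e.g. a co-author list)."""
    for f in match_fields:
        v = record.get(f)
        if not v:
            continue
        for item in (v if isinstance(v, (list, set, tuple)) else [v]):
            yield (f, item)

def consolidate(records, primary_name, match_fields):
    """Merge author-name variants that share any field value with a
    record already attributed to the primary author, iterating until
    no further variants are absorbed."""
    confirmed = {primary_name}
    changed = True
    while changed:
        changed = False
        # Field values seen among records already attributed to the author.
        known = {fv for r in records if r["author"] in confirmed
                 for fv in field_values(r, match_fields)}
        for r in records:
            if r["author"] in confirmed:
                continue
            if set(field_values(r, match_fields)) & known:
                confirmed.add(r["author"])
                changed = True
    return confirmed

records = [
    {"author": "Doe, John", "orcid": "0000-0001", "email": "jd@u.edu"},
    {"author": "Doe, J",    "orcid": "0000-0001"},
    {"author": "Doe, J.",   "email": "jd@u.edu"},
    {"author": "Doe, Jane", "orcid": "0000-0002"},
]
print(sorted(consolidate(records, "Doe, John", ["orcid", "email"])))
# prints ['Doe, J', 'Doe, J.', 'Doe, John']
```

As in the abstract, conclusive fields (identifiers, emails) should be applied in early rounds and weaker fields (affiliations, co-authors, source titles) later, with manual inspection of whatever remains.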
  4.
    Life cycle assessment (LCA), a tool used to assess the environmental impacts of products and processes, has been used to evaluate a range of aquaculture systems. Eighteen LCA studies were reviewed, which included assessments of recirculating aquaculture systems (RAS), flow-through systems, net cages, and pond systems. This review considered the potential to mitigate environmental burdens with a movement from extensive to intensive aquaculture systems. Due to the diversity in study results, specific processes (feed, energy, and infrastructure) and specific impact categories (land use, water use, and eutrophication potential) were analyzed in-depth. The comparative analysis indicated a possible shift from local to global impacts with a progression from extensive to intensive systems if mitigation strategies were not performed. The shift was partially due to increased electricity requirements but also varied with electricity source. The impacts from infrastructure were less than 13% of the environmental impact and considered negligible. For feed, the environmental impacts were typically more dependent on the feed conversion ratio (FCR) than the type of system. Feed also contributed over 50% of the impacts on land use, second only to energy carriers. The analysis of water use indicated that intensive recirculating systems efficiently reduce water use compared with extensive systems; however, at present, studies have only considered direct water use, and future work is required that incorporates indirect and consumptive water use. Alternative aquaculture systems that can improve the total nutrient uptake and production yield per material- and energy-based input, thereby reducing the overall emissions per unit of feed, should be further investigated to optimize the overall performance of aquaculture systems, considering both global and local environmental impacts.
While LCA can be a valuable tool to evaluate trade-offs in system designs, the results are often location and species specific. Therefore, it is critical to consider both of these criteria in conjunction with LCA results when developing aquaculture systems. 
  5. Abstract

    Ecological interactions range from purely specialized to extremely generalized in nature. Recent research has shown very high levels of specialization in the cyanolichens involving Peltigera (mycobionts) and their Nostoc photosynthetic partners (cyanobionts). Yet, little is known about the mechanisms contributing to the establishment and maintenance of such high specialization levels.

    Here, we characterized interactions between Peltigera and Nostoc partners at a global scale, using more than one thousand thalli. We used tools from network theory, community phylogenetics and biogeographical history reconstruction to evaluate how these symbiotic interactions may have evolved.

    After splitting the interaction matrix into modules of preferentially interacting partners, we evaluated how module membership might have evolved along the mycobionts' phylogeny. We also teased apart the contributions of geographical overlap vs phylogeny in driving interaction establishment between Peltigera and Nostoc taxa.

    Module affiliation rarely evolves through the splitting of large ancestral modules. Instead, new modules appear to emerge independently, which is often associated with a fungal speciation event. We also found strong phylogenetic signal in these interactions, which suggests that partner switching is constrained by conserved traits. Therefore, it seems that a high rate of fungal diversification following a switch to a new cyanobiont can lead to the formation of large modules, with cyanobionts associating with multiple closely related Peltigera species.

    Finally, when restricting our analyses to Peltigera sister species, the latter differed more through partner acquisition/loss than through replacement (i.e., switching). This pattern vanishes as we look at sister species that diverged further in the past. This suggests that fungal speciation may be accompanied by a stepwise process of (a) novel partner acquisition and (b) loss of the ancestral partner. This could explain the maintenance of high specialization levels in this symbiotic system, where the transmission of the cyanobiont to the next generation is assumed to be predominantly horizontal.

    Synthesis. Overall, our study suggests that oscillation between generalization and ancestral partner loss may maintain high specialization within the lichen genus Peltigera, and that partner selection is driven not only by partners' geographical overlap but also by their phylogenetically conserved traits.
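The "modules of preferentially interacting partners" described above are groups detected in a bipartite interaction network. As a rough illustration of the idea, connected components of a toy mycobiont-cyanobiont graph give a crude first pass; the taxa names and interactions below are invented, and real analyses use proper modularity optimization rather than simple components.

```python
# Toy sketch: groups of interacting partners as connected components of
# a bipartite mycobiont-cyanobiont graph. Taxa and interactions are
# invented; real studies optimize network modularity instead.

def interaction_modules(interactions):
    """interactions: iterable of (mycobiont, cyanobiont) pairs.
    Returns a list of sets, each a connected group of partner taxa."""
    # Build an undirected adjacency over both partner types.
    adj = {}
    for myco, cyano in interactions:
        adj.setdefault(("M", myco), set()).add(("C", cyano))
        adj.setdefault(("C", cyano), set()).add(("M", myco))
    seen, modules = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:  # depth-first traversal of one component
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n[1])
            stack.extend(adj[n])
        modules.append(comp)
    return modules

pairs = [("P.canina", "N.A"), ("P.membranacea", "N.A"),
         ("P.aphthosa", "N.B")]
for module in interaction_modules(pairs):
    print(sorted(module))
# prints ['N.A', 'P.canina', 'P.membranacea'] then ['N.B', 'P.aphthosa']
```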
