

Title: Open-source workflow design and management software to interrogate duckweed growth conditions and stress responses
Abstract

Duckweeds, a family of floating aquatic plants, are ideal model plants for laboratory experiments because they are small, easy to cultivate, and reproduce quickly. Duckweed cultivation for the purposes of scientific research requires that lineages be maintained as continuous populations of asexually propagating fronds, so research teams need to develop optimized cultivation conditions and coordinate maintenance tasks for duckweed stocks. Additionally, computational image analysis is proving to be a powerful duckweed research tool, but researchers lack software tools to assist with data collection and storage in a way that can feed into scripted data analysis. We set out to support these processes using laboratory management software called Aquarium, an open-source application developed to manage laboratory inventory and plan experiments. We developed a suite of duckweed cultivation and experimentation operation types in Aquarium, which we then integrated with novel data analysis scripts. We then demonstrated the efficacy of our system with a series of image-based growth assays, and explored how our framework could be used to develop optimized cultivation protocols. We discuss the unexpected advantages and the limitations of this approach, suggesting areas for future software tool development. In its current state, our approach helps to bridge the gap between laboratory implementation and data analytical software for duckweed biologists and builds a foundation for future development of end-to-end computational tools in plant science.
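The image-based growth assays described above reduce to scripted pixel counting over time. As a minimal illustration (not the authors' actual analysis scripts; the greenness threshold and the exponential growth-rate formula below are our own simplifying assumptions), frond coverage and relative growth rate can be estimated like this:

```python
import numpy as np

def frond_area_fraction(rgb, green_margin=20):
    """Estimate the fraction of an image covered by duckweed fronds.

    A pixel counts as 'frond' when its green channel exceeds both red
    and blue by `green_margin` (a crude greenness threshold, chosen
    here purely for illustration). `rgb` is an (H, W, 3) uint8 array.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g - r > green_margin) & (g - b > green_margin)
    return mask.mean()

def relative_growth_rate(area_t0, area_t1, days):
    """Exponential relative growth rate from two area measurements."""
    return np.log(area_t1 / area_t0) / days
```

Running `frond_area_fraction` on each timepoint of an image series and feeding consecutive areas to `relative_growth_rate` yields one growth-rate estimate per interval, which can then be compared across cultivation conditions.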

 
NSF-PAR ID: 10452406
Publisher / Repository: Springer Science + Business Media
Journal Name: Plant Methods
Volume: 19
Issue: 1
ISSN: 1746-4811
Sponsoring Org: National Science Foundation
More Like this
  1. Responding to the need to teach remotely due to COVID-19, we used readily available computational approaches (and developed associated tutorials (https://mdh-cures-community.squarespace.com/virtual-cures-and-ures)) to teach virtual Course-Based Undergraduate Research Experience (CURE) laboratories that fulfil generally accepted main components of CUREs or Undergraduate Research Experiences (UREs): Scientific Background, Hypothesis Development, Proposal, Experiments, Teamwork, Data Analysis, Conclusions, and Presentation [1]. We then developed and taught remotely, in three phases, protein-centric CURE activities that are adaptable to virtually any protein, emphasizing contributions of noncovalent interactions to structure, binding, and catalysis (a foundational concept of the ASBMB learning framework [2]). The courses had five learning goals (unchanged in the virtual format), focused on (i) use of primary literature and bioinformatics, (ii) the roles of non-covalent interactions, (iii) keeping accurate laboratory notebooks, (iv) hypothesis development and research proposal writing, and (v) presenting the project and drawing evidence-based conclusions. The first phase, Developing a Research Proposal, contains three modules and develops the hallmarks of a good student-developed hypothesis using available literature (PubMed [3]) and preliminary observations obtained using bioinformatics: Module 1, Using Primary Literature and Databases (Protein Data Bank [4], BLAST [5], and Clustal Omega [6]); and Module 2, Molecular Visualization (PyMOL [7] and Chimera [8]); culminating in a research proposal (Module 3). Provided rubrics guide student expectations. In the second phase, Preparing the Proteins, students prepared the necessary proteins and mutants using Module 4, Creating and Validating Models, which leads users through creating mutants with PyMOL, homology modeling with Phyre2 [9] or Missense [10], energy minimization using RefineD [11] or ModRefiner [12], and structure validation using MolProbity [13].
In the third phase, Computational Experimental Approaches to Explore the Questions Developed from the Hypothesis, students selected appropriate tools to perform their experiments, chosen from computational techniques suitable for a remotely taught CURE laboratory class. Questions, paired with computational approaches, were selected from Module 5, Exploring Titratable Groups in a Protein (with H++ [14]); Module 6, Exploring Small-Molecule Ligand Binding (with SwissDock [15]); Module 7, Exploring Protein-Protein Interaction (with HawkDock [16]); Module 8, Detecting and Exploring Potential Binding Sites on a Protein (with POCASA [17] and SwissDock); and Module 9, Structure-Activity Relationships of Ligand Binding and Drug Design (with SwissDock, OpenEye [18], or the Molecular Operating Environment (MOE) [19]). All involve freely available computational approaches on publicly accessible web-based servers around the world (with the exception of MOE). Original-literature/journal-club activities on these approaches helped students suggest tie-ins to wet-lab experiments they could conduct in the future to complement their computational work. This approach allowed us to continue using high-impact CURE teaching without changing our course learning goals. Quantitative data (including replicates) were collected and analyzed during regular class periods. Students developed evidence-based conclusions and related them to their research questions and hypotheses. Projects culminated in a presentation, where faculty feedback was facilitated with the virtual presentation platform from QUBES [20]. These computational approaches are readily adaptable for topics accessible to first-year through senior-year classes and individual research projects (UREs). We used them in both partial- and full-semester CUREs in various institutional settings. We believe this format can benefit faculty and students from a wide variety of teaching institutions under conditions where remote teaching is necessary.
  2. Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. 
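To make the modular-tool idea concrete (this is a generic sketch of the pattern such toolkits compose, not PlantCV's actual API), support for images containing multiple plants can be reduced to labeling connected foreground regions in a binary mask and reporting each region's area:

```python
from collections import deque

def label_plants(mask):
    """Label 4-connected foreground regions in a binary mask.

    `mask` is a list of lists of 0/1 values. Returns (labels, sizes),
    where `labels` assigns every foreground pixel a region id
    (0 = background) and `sizes[id]` is that region's area in pixels.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}
    next_id = 1
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                # breadth-first flood fill of one plant
                q = deque([(y, x)])
                labels[y][x] = next_id
                area = 0
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_id
                            q.append((ny, nx))
                sizes[next_id] = area
                next_id += 1
    return labels, sizes
```

Per-plant areas from the `sizes` dictionary can then be tracked across timepoints, which is the core of a non-destructive multi-plant growth measurement.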
  3. Abstract Summary

    Over the past decade, short-read sequence alignment has become a mature technology. Optimized algorithms, careful software engineering and high-speed hardware have contributed to greatly increased throughput and accuracy. With these improvements, many opportunities for performance optimization have emerged. In this review, we examine three general-purpose short-read alignment tools—BWA-MEM, Bowtie 2 and Arioc—with a focus on performance optimization. We analyze the performance-related behavior of the algorithms and heuristics each tool implements, with the goal of arriving at practical methods of improving processing speed and accuracy. We indicate where an aligner's default behavior may result in suboptimal performance, explore the effects of computational constraints such as end-to-end mapping and alignment scoring threshold, and discuss sources of imprecision in the computation of alignment scores and mapping quality. With this perspective, we describe an approach to tuning short-read aligner performance to meet specific data-analysis and throughput requirements while avoiding potential inaccuracies in subsequent analysis of alignment results. Finally, we illustrate how this approach avoids easily overlooked pitfalls and leads to verifiable improvements in alignment speed and accuracy.
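One concrete example of the alignment-scoring constraints discussed above: Bowtie 2 expresses its minimum accepted alignment score as a linear function of read length via its `--score-min` option. The coefficients below reflect our reading of its end-to-end default and should be checked against the manual for your version:

```python
def min_score_linear(read_len, a=-0.6, b=-0.6):
    """Minimum accepted alignment score as a linear function of read
    length, f(L) = a + b * L. This is the functional form used by
    Bowtie 2's --score-min option; the default coefficients shown are
    our reading of its end-to-end default, not a verified constant.
    """
    return a + b * read_len

def passes_threshold(score, read_len):
    """Would an end-to-end alignment with this score be reported?"""
    return score >= min_score_linear(read_len)
```

Because the threshold scales with read length, the same absolute score can pass for a long read yet fail for a short one, which is one of the ways a tool's default behavior can silently filter alignments.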

    Contact

    richard.wilton@jhu.edu

    Supplementary information

    Appendices referenced in this article are available at Bioinformatics online.

     
  4. The Tweet Collection Management (TWT) Team aims to ingest 5 billion tweets, clean this data, analyze the metadata present, extract key information, classify tweets into categories, and finally, index these tweets into Elasticsearch for browsing and querying. The main deliverable of this project is a running software application for searching tweets and for viewing Twitter collections from Digital Library Research Laboratory (DLRL) event archive projects. As a starting point, we focused on two development goals: (1) hashtag-based and (2) username-based search for tweets. For IR1, we completed extraction of two fields within our sample collection: hashtags and username. Sample code for TwiRole, a user-classification program, was investigated for use in our project. We were able to sample from multiple collections of tweets, spanning topics like COVID-19 and hurricanes. Initial work encompassed using a sample collection provided via Google Drive. NFS-based persistent storage was later introduced to allow access to larger collections. In total, we have developed 9 services to extract key information like username, hashtags, geo-location, and keywords from tweets. We have also developed services to allow for parsing and cleaning of raw API data, and backup of data in an Apache Parquet filestore. All services are Dockerized and added to the GitLab Container Registry. The services are deployed in the CS cloud cluster, integrating them into the full search-engine workflow. A service was created to convert WARC files to JSON for reading archive files into the application. Unit testing of services is complete and end-to-end tests have been conducted to improve system robustness and avoid failure during deployment. The TWT team has indexed 3,200 tweets into the Elasticsearch index. Future work could involve parallelization of the extraction of metadata, an alternative feature-flag approach, advanced geo-location inference, and adoption of the DMI-TCAT format.
Key deliverables include a data body that allows for search, sort, filter, and visualization of raw tweet collections and metadata analysis; a running software application for searching tweets and for viewing Twitter collections from Digital Library Research Laboratory (DLRL) event archive projects; and a user guide to assist those using the system. 
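The hashtag- and username-extraction step can be illustrated with a small sketch. The regular expressions below are our own approximations of Twitter's entity rules, not the project's actual services, which parse structured fields from the raw API metadata:

```python
import re

# Approximate patterns for Twitter entities (illustrative only):
# hashtags are '#' followed by word characters; handles are '@'
# followed by up to 15 word characters.
HASHTAG_RE = re.compile(r"#(\w+)")
MENTION_RE = re.compile(r"@(\w{1,15})")

def extract_entities(tweet_text):
    """Pull hashtags and mentioned usernames out of raw tweet text."""
    return {
        "hashtags": HASHTAG_RE.findall(tweet_text),
        "mentions": MENTION_RE.findall(tweet_text),
    }
```

In a pipeline like the one described, the extracted fields would be attached to each tweet document before indexing, so that Elasticsearch queries can filter on them directly.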
  5. Abstract

    Accelerating the design and development of new advanced materials is one of the priorities in modern materials science. These efforts are critically dependent on the development of comprehensive materials cyberinfrastructures which enable efficient data storage, management, sharing, and collaboration as well as integration of computational tools that help establish processing–structure–property relationships. In this contribution, we present implementation of such computational tools into a cloud-based platform called BisQue (Kvilekval et al., Bioinformatics 26(4):554, 2010). We first describe the current state of BisQue as an open-source platform for multidisciplinary research in the cloud and its potential for 3D materials science. We then demonstrate how new computational tools, primarily aimed at processing–structure–property relationships, can be implemented into the system. Specifically, in this work, we develop a module for BisQue that enables microstructure-sensitive predictions of effective yield strength of two-phase materials. Towards this end, we present an implementation of a computationally efficient data-driven model into the BisQue platform. The new module is made available online (web address: https://bisque.ece.ucsb.edu/module_service/Composite_Strength/) and can be used from a web browser without any special software and with minimal computational requirements on the user end. The capabilities of the module for rapid property screening are demonstrated in case studies with two different methodologies based on datasets containing 3D microstructure information from (i) synthetic generation and (ii) sampling large 3D volumes obtained in experiments.
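For orientation, the simplest microstructure-sensitive estimate for a two-phase material is a volume-fraction-weighted rule of mixtures. This textbook baseline is only a stand-in for the data-driven model the module actually implements:

```python
def yield_strength_rule_of_mixtures(vf_hard, sigma_hard, sigma_soft):
    """Upper-bound (Voigt) estimate of a two-phase composite's yield
    strength: the volume-fraction-weighted average of the two phase
    strengths. A textbook baseline, NOT the data-driven model in the
    BisQue module described above.
    """
    if not 0.0 <= vf_hard <= 1.0:
        raise ValueError("volume fraction must be in [0, 1]")
    return vf_hard * sigma_hard + (1.0 - vf_hard) * sigma_soft
```

Microstructure-sensitive models improve on this baseline by accounting for phase morphology and spatial arrangement in the 3D volume, which a pure volume-fraction average ignores.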

     