Title: A Perspective on Developing Modeling and Image Analysis Tools to Investigate Mechanosensing Proteins
Synopsis: The shift of funding organizations toward prioritizing interdisciplinary work points to the need for workflow models that better accommodate interdisciplinary studies. Most scientists are trained in a specific field and are often unaware of the insights that other disciplines could contribute to solving a given problem. In this paper, we present a perspective on how we developed an experimental pipeline between a microscopy lab and an image analysis/bioengineering lab. Specifically, we connected microscopy observations about a putative mechanosensing protein, obscurin, to image analysis techniques that quantify cell changes. While the individual methods used are well established (fluorescence microscopy; the ImageJ WEKA and MTrack2 programs; MATLAB), there are no existing best practices for how to integrate these techniques into a cohesive, interdisciplinary narrative. Here, we describe a broadly applicable workflow by which microscopists can more easily quantify cell properties (e.g., perimeter, velocity) from microscopy videos of eukaryotic (MDCK) adherent cells. Additionally, we give examples of how these foundational measurements can be built into more complex, customizable cell mechanics tools and models.
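As a concrete illustration of the foundational measurements named above, the sketch below computes cell perimeter and frame-to-frame speed in Python from an already-segmented time-lapse stack. It stands in for, rather than reproduces, the ImageJ/MATLAB steps of the published pipeline; the file name, pixel size, and frame interval are assumed values.

```python
# Minimal sketch: perimeter and speed from segmented microscopy frames.
# Assumes a time-lapse stack of binary masks (one cell of interest per frame)
# has already been produced, e.g., by a trainable segmentation step.
import numpy as np
from skimage import io, measure

PIXEL_SIZE_UM = 0.65      # microns per pixel (assumed calibration)
FRAME_INTERVAL_MIN = 5.0  # minutes between frames (assumed)

masks = io.imread("mdck_masks.tif") > 0   # hypothetical (T, Y, X) binary stack

perimeters, centroids = [], []
for frame in masks:
    props = measure.regionprops(measure.label(frame))
    if not props:
        continue
    cell = max(props, key=lambda p: p.area)             # largest object per frame
    perimeters.append(cell.perimeter * PIXEL_SIZE_UM)   # microns
    centroids.append(cell.centroid)                     # (row, col) in pixels

centroids = np.asarray(centroids)
step_um = np.linalg.norm(np.diff(centroids, axis=0), axis=1) * PIXEL_SIZE_UM
speed_um_per_min = step_um / FRAME_INTERVAL_MIN

print(f"mean perimeter: {np.mean(perimeters):.1f} um")
print(f"mean speed: {np.mean(speed_um_per_min):.3f} um/min")
```

The same per-frame loop generalizes to other region properties (area, eccentricity, solidity), which is how simple measurements can be composed into the more customized mechanics tools mentioned above.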
Award ID(s):
2233770
PAR ID:
10456665
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Integrative And Comparative Biology
ISSN:
1540-7063
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract. Introduction: Traction force microscopy (TFM) is a widely used technique to measure cell contractility on compliant substrates that mimic the stiffness of human tissues. For every step in a TFM workflow, users make choices that impact the quantitative results, yet the rationales and consequences of these decisions are often unclear. We have found few papers that show the complete experimental and mathematical steps of TFM, which obscures the full effect of these decisions on the final output. Methods: We therefore present this "Field Guide" with the goal of explaining the mathematical basis of common TFM methods to practitioners in an accessible way. We specifically focus on how errors propagate in TFM workflows given specific experimental design and analytical choices. Results: We cover important assumptions and considerations in TFM substrate manufacturing, substrate mechanical properties, imaging techniques, image processing methods, approaches and parameters used in calculating traction stress, and data-reporting strategies. Conclusions: By presenting a conceptual review and analysis of TFM-focused research articles published over the last two decades, we give researchers in the field a better understanding of their options, so that they can make more informed choices when creating TFM workflows, depending on the type of cell being studied. With this review, we aim to empower experimentalists to quantify cell contractility with confidence.
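The equations themselves are not reproduced in the abstract above. As one hedged example of an early TFM step where analytical choices (interrogation window size, sub-pixel interpolation) already shape the result, the sketch below estimates a substrate displacement field from bead images by windowed phase cross-correlation with scikit-image; the file names and window size are assumptions, not values from the article.

```python
# Sketch of a common TFM preprocessing step: estimating substrate displacement
# from bead images before/after cell relaxation via windowed phase cross-correlation.
# Window size and file names are illustrative assumptions.
import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation

reference = io.imread("beads_relaxed.tif").astype(float)   # hypothetical file
deformed = io.imread("beads_stressed.tif").astype(float)   # hypothetical file

WIN = 64  # interrogation window size in pixels (assumed)
vectors = []
for r in range(0, reference.shape[0] - WIN + 1, WIN):
    for c in range(0, reference.shape[1] - WIN + 1, WIN):
        ref_win = reference[r:r + WIN, c:c + WIN]
        def_win = deformed[r:r + WIN, c:c + WIN]
        shift, _, _ = phase_cross_correlation(ref_win, def_win, upsample_factor=10)
        vectors.append((r + WIN // 2, c + WIN // 2, shift[0], shift[1]))

# Each entry holds the window-center row/col and the sub-pixel (dy, dx)
# displacement in pixels; downstream TFM steps convert this field to traction
# stress through an elasticity model chosen by the user.
print(f"{len(vectors)} displacement vectors estimated")
```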
  2. Background: We performed a systematic review that identified at least 9,000 scientific papers on PubMed that include immunofluorescent images of cells from the central nervous system (CNS). These CNS papers contain tens of thousands of immunofluorescent neural images supporting the findings of over 50,000 associated researchers. While many existing reviews discuss aspects of immunofluorescent microscopy, such as image acquisition and staining protocols, few papers discuss immunofluorescent imaging from an image-processing perspective. We analyzed the literature to determine which image processing methods were commonly published alongside the associated CNS cell, microscopy technique, and animal model, and we highlight gaps in image processing documentation and reporting in the CNS research field. Methods: We completed a comprehensive search of PubMed publications using Medical Subject Headings (MeSH) terms and other general search terms for CNS cells and common fluorescent microscopy techniques. Publications were found on PubMed using a combination of column description terms and row description terms. We manually tagged the comma-separated values (CSV) metadata of each publication with the following categories: animal or cell model, quantified features, threshold techniques, segmentation techniques, and image processing software. Results: Of the almost 9,000 immunofluorescent imaging papers identified in our search, only 856 explicitly include image processing information. Moreover, hundreds of those 856 papers are missing the thresholding, segmentation, and morphological feature details necessary for explainable, unbiased, and reproducible results. In our assessment of the literature, we visualized current image processing practices, compiled the image processing options from the top twelve software programs, and designed a road map to enhance image processing. We determined that thresholding and segmentation methods were often left out of publications and were underreported or underutilized in quantitative CNS cell research. Discussion: Less than 10% of papers with immunofluorescent images include image processing in their methods. A few authors are implementing advanced image analysis methods to quantify over 40 different CNS cell features, which can provide quantitative insights that will advance CNS research. However, our review argues that image analysis methods will remain limited in rigor and reproducibility without more rigorous and detailed reporting of image processing methods. Conclusion: Image processing is a critical part of CNS research that must be improved to increase scientific insight, explainability, reproducibility, and rigor.
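To make the reporting gap concrete, here is a minimal, hedged sketch of a thresholding-and-segmentation step whose parameters (threshold method, minimum object size) are exactly the details the review finds underreported. The channel name and parameter values are illustrative, not drawn from the reviewed papers.

```python
# Minimal sketch of a reportable thresholding/segmentation step for an
# immunofluorescent image; the parameters below are the kind of detail a
# methods section needs to state for the analysis to be reproducible.
import numpy as np
from skimage import io, filters, morphology, measure

image = io.imread("gfap_channel.tif").astype(float)   # hypothetical single channel

threshold = filters.threshold_otsu(image)             # report: Otsu, global
binary = image > threshold
binary = morphology.remove_small_objects(binary, min_size=50)  # report: min size (px)
labels = measure.label(binary)

areas = [region.area for region in measure.regionprops(labels)]
print(f"threshold = {threshold:.1f}, objects = {labels.max()}, "
      f"median area = {np.median(areas) if areas else 0} px")
```

Stating even these few values (threshold method, minimum object size, software version) in a methods section goes most of the way toward the explainable, reproducible reporting the review calls for.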
  3. The advent of advanced robotic platforms and workflow automation tools has revolutionized the landscape of biological research, offering unprecedented levels of precision, reproducibility, and versatility in experimental design. In this work, we present an automated and modular workflow for exploring cell behavior in two-dimensional culture systems. By integrating the BioAssemblyBot® (BAB) robotic platform and the BioApps™ workflow automater with live-cell fluorescence microscopy, our workflow facilitates execution and analysis of in vitro migration and proliferation assays. Robotic assistance and automation allow for the precise and reproducible creation of highly customizable cell-free zones (CFZs), or wounds, in cell monolayers and “hands-free,” schedulable integration with real-time monitoring systems for cellular dynamics. CFZs are designed as computer-aided design models and recreated in confluent cell layers by the BAB 3D-Bioprinting tool. The dynamics of migration and proliferation are evaluated in individual cells using live-cell fluorescence microscopy and an in-house pipeline for image processing and single-cell tracking. Our robotics-assisted approach outperforms manual scratch assays with enhanced reproducibility, adaptability, and precision. The incorporation of automation further facilitates increased flexibility in wound geometry and allows for many experimental conditions to be analyzed in parallel. Unlike traditional cell migration assays, our workflow offers an adjustable platform that can be tailored to a wide range of applications with high-throughput capability. The key features of this system, including its scalability, versatility, and the ability to maintain a high degree of experimental control, position it as a valuable tool for researchers across various disciplines. 
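The in-house tracking pipeline itself is not detailed in the abstract; purely as an illustration of the frame-to-frame linking step such a pipeline typically performs after segmentation, here is a hedged nearest-neighbour sketch in Python. The distance gate is an assumed parameter, and a production pipeline would usually replace the greedy assignment with a globally optimal one.

```python
# Illustrative sketch of frame-to-frame single-cell linking by nearest neighbour.
# The maximum linking distance is an assumed parameter, not the paper's value.
import numpy as np
from scipy.spatial.distance import cdist

MAX_LINK_DIST = 20.0  # pixels; assumed gate for a valid link between frames

def link_frames(prev_centroids, next_centroids):
    """Return (prev_index, next_index) links within the distance gate."""
    if len(prev_centroids) == 0 or len(next_centroids) == 0:
        return []
    dist = cdist(prev_centroids, next_centroids)
    links = []
    for i in range(dist.shape[0]):
        j = int(np.argmin(dist[i]))          # greedy nearest neighbour
        if dist[i, j] <= MAX_LINK_DIST:
            links.append((i, j))
    return links

# Example with synthetic centroids (row, col) from two consecutive frames:
frame_t = np.array([[10.0, 12.0], [40.0, 42.0]])
frame_t1 = np.array([[12.0, 13.0], [41.0, 45.0]])
print(link_frames(frame_t, frame_t1))   # -> [(0, 0), (1, 1)]
```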
  4. Neueder, Andreas (Ed.)
    Light microscopy methods have continued to advance, allowing unprecedented analysis of various cell types in tissues, including the brain. Although the functional state of some cell types, such as microglia, can be determined by morphometric analysis, techniques to perform robust, quick, and accurate measurements have not kept pace with the amount of imaging data that can now be generated. Most of these image segmentation tools are further burdened by an inability to assess structures in three dimensions. Despite the rise of machine learning techniques, the nature of some biological structures prevents the training of several current-day implementations. Here we present PrestoCell, a novel use of persistence-based clustering to segment cells in light microscopy images, as a customized Python-based tool that leverages the free multidimensional image viewer Napari. In evaluating and comparing PrestoCell to several existing tools, including 3DMorph, Omnipose, and Imaris, we demonstrate that PrestoCell produces image segmentations that rival these solutions. In particular, our use of cell nuclei information made it possible to correctly segment individual cells that were interacting with one another, increasing accuracy. These benefits come in addition to simplified, graphical user refinement of cell masks that does not require expensive commercial software licenses. We further demonstrate that PrestoCell can complete image segmentation in large samples from light sheet microscopy, allowing quantitative analysis of these large datasets. As an open-source program that leverages freely available visualization software, with minimal computer requirements, we believe that PrestoCell can significantly increase the ability of users without data or computer science expertise to perform complex image analysis.
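PrestoCell's persistence-based clustering is not reproduced here; the sketch below only illustrates the Napari side of such a workflow, loading an image stack alongside an editable label layer so a user can refine cell masks interactively without commercial software. The file names are placeholders, and this is not PrestoCell itself.

```python
# Minimal sketch of the Napari-based refinement step such a tool can build on:
# load an image volume and an editable label layer so a user can correct cell
# masks by hand. File names are placeholders.
import napari
from skimage import io

image = io.imread("microglia_stack.tif")     # hypothetical 3D image stack
labels = io.imread("microglia_labels.tif")   # hypothetical integer label volume

viewer = napari.Viewer()
viewer.add_image(image, name="raw", colormap="gray")
viewer.add_labels(labels, name="cell masks")   # paint/fill tools allow manual fixes
napari.run()   # start the event loop when run as a script
```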
  5. Abstract: Image-based machine learning tools are an ascendant 'big data' research avenue. Citizen science platforms, like iNaturalist, and museum-led initiatives provide researchers with an abundance of data and knowledge to extract, including metadata, species identifications, and phenomic data. Ecological and evolutionary biologists are increasingly applying complex, multi-step processes to such data. These processes often include machine learning techniques, often built by others, that are difficult for other members of a collaboration to reuse. We present a conceptual workflow model for machine learning applications that use image data to extract biological knowledge in the emerging field of imageomics. We derive an implementation of this conceptual workflow for a specific imageomics application that adheres to FAIR principles as a formal workflow definition, allowing fully automated and reproducible execution and consisting of reusable workflow components. We outline technologies and best practices for creating an automated, reusable, and modular workflow, and we show how they promote the reuse of machine learning models and their adaptation to new research questions. This conceptual workflow can be adapted: it can be semi-automated, contain different components than those presented here, or have parallel components for comparative studies. We encourage researchers, both computer scientists and biologists, to build upon this conceptual workflow, which combines machine learning tools on image data to answer novel scientific questions in their respective fields.
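The formal workflow definition itself is not included in the abstract; purely as a generic illustration of the kind of reusable component with declared inputs and outputs that such a workflow composes, here is a hedged Python sketch. The step names and fields are invented for the example and are not the paper's actual workflow.

```python
# Generic illustration of reusable workflow components with explicit inputs and
# outputs, the property that lets steps be recombined and rerun reproducibly.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str
    inputs: List[str]            # names of artifacts this step consumes
    outputs: List[str]           # names of artifacts this step produces
    run: Callable[[Dict], Dict]  # pure function from artifacts to new artifacts

def execute(steps: List[Step], artifacts: Dict) -> Dict:
    """Run steps in order, checking that declared inputs exist before each step."""
    for step in steps:
        missing = [k for k in step.inputs if k not in artifacts]
        if missing:
            raise ValueError(f"{step.name}: missing inputs {missing}")
        artifacts.update(step.run(artifacts))
    return artifacts

pipeline = [
    Step("detect", ["images"], ["crops"],
         lambda a: {"crops": [f"crop:{i}" for i in a["images"]]}),
    Step("classify", ["crops"], ["labels"],
         lambda a: {"labels": [f"species:{c}" for c in a["crops"]]}),
]
print(execute(pipeline, {"images": ["img1.jpg", "img2.jpg"]})["labels"])
```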