Abstract The marine tintinnid ciliate Amphorides quadrilineata is a feeding-current feeder, creating flows for particle encounter, capture and rejection. Individual-level behaviors were observed using high-speed, high-magnification digital imaging. Cells beat their cilia backward to swim forward, simultaneously generating a feeding current that brings in particles; these particles are then individually captured through localized ciliary reversals. When swimming backward, cells beat their cilia forward (i.e., ciliary reversals involving the entire ring of cilia), actively rejecting unwanted particles. Cells achieve path-averaged swimming speeds of 3–4 total lengths per second. Both micro-particle image velocimetry and computational fluid dynamics were employed to characterize the cell-scale flows. Forward swimming generates a feeding current, a saddle flow vector field in front of the cell, whereas backward swimming creates an inverse saddle flow vector field behind the cell; these ciliary flows facilitate particle encounter, capture and rejection. The model tintinnid with a full-length lorica achieves an encounter rate Q ~29% higher than that without a lorica, albeit at a ~142% increase in mechanical power and a decrease in quasi-propulsive efficiency (~0.24 vs. ~0.38). It is also suggested that Q can be approximated by π(W/2 + l)²U, where W, l and U represent the lorica oral diameter, ciliary length and swimming speed, respectively.
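The closing approximation Q ≈ π(W/2 + l)²U treats the encounter cross-section as a disk of radius W/2 + l swept forward at the swimming speed. A minimal Python sketch of this formula follows; the numeric values are purely illustrative placeholders, not measurements from the paper:

```python
import math

def encounter_rate(W, l, U):
    """Approximate encounter (clearance) rate Q = pi * (W/2 + l)^2 * U.

    W : lorica oral diameter
    l : ciliary length
    U : swimming speed
    All lengths in the same unit; Q then has units of length^3 per time.
    """
    return math.pi * (W / 2 + l) ** 2 * U

# Hypothetical values in micrometers and um/s, for illustration only:
W, l, U = 40.0, 15.0, 300.0
Q = encounter_rate(W, l, U)  # encounter rate in um^3 per second
```

The formula is a geometric upper-bound style estimate: any particle whose center passes within W/2 + l of the swimming axis is counted as encountered.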
Video Dataset of Respiratory Cilia Motion Phenotypes
Respiratory cilia are important components of the lung defense mechanism. The coordinated beating of cilia clears the airways of pathogens and foreign particles. We present a large-scale validation dataset of cilia motion for characterizing ciliary function, with ciliary beat frequency (CBF) provided as a benchmark metric. The video dataset of cilia motion phenotypes is organized by stimulus category, including temperature, drug and ACE2 manipulation. Under each category, mouse trachea samples were treated with different stimuli and imaged with a high-speed video microscope to capture cilia motion. In addition, we provide ground-truth masks labeling the ciliary area for image segmentation. This validation dataset can serve as a benchmark for the computer vision community to develop models for analyzing ciliary beat patterns. The dataset contains 872 videos and their ground-truth masks with the ciliary area labeled. Videos were recorded at 250 frames per second for 1 second, with an image size of 800×800 pixels and a pixel size of 0.07987 μm. A CSV file lists the CBF value of each video.
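CBF is commonly estimated as the dominant frequency in a pixel's intensity trace over time. A minimal sketch of that spectral approach, assuming a 250-frame clip at 250 fps as in this dataset (this is an illustrative method, not necessarily the pipeline the dataset authors used):

```python
import numpy as np

def estimate_cbf(intensity, fps=250.0):
    """Estimate ciliary beat frequency (Hz) as the peak of the
    one-sided power spectrum of a pixel-intensity time series."""
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                        # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2     # one-sided power spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    return freqs[np.argmax(power[1:]) + 1]  # skip the 0 Hz bin

# Synthetic trace: a 12 Hz oscillation sampled for 1 s at 250 fps
t = np.arange(250) / 250.0
trace = np.sin(2 * np.pi * 12.0 * t)
cbf = estimate_cbf(trace)  # ~12.0 Hz
```

With a 1-second clip the frequency resolution is 1 Hz, so in practice longer windows or spectral interpolation are used when finer CBF estimates are needed.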
- Award ID(s):
- 1845915
- PAR ID:
- 10528496
- Publisher / Repository:
- Zenodo
- Date Published:
- Subject(s) / Keyword(s):
- Cilia motion, CBF, image segmentation
- Format(s):
- Medium: X
- Location:
- Zenodo
- Right(s):
- Creative Commons Attribution 4.0 International; Open Access
- Institution:
- University of Georgia
- Sponsoring Org:
- National Science Foundation
More Like this
-
Discher, Dennis (Ed.) Hydrodynamic flow produced by multiciliated cells is critical for fluid circulation and cell motility. Hundreds of cilia beat with metachronal synchrony for fluid flow. Cilia-driven fluid flow produces extracellular hydrodynamic forces that cause neighboring cilia to beat in a synchronized manner. However, hydrodynamic coupling between neighboring cilia is not the sole mechanism that drives cilia synchrony. Cilia are nucleated by basal bodies (BBs) that link to each other and to the cell’s cortex via BB-associated appendages. The intracellular BB and cortical network is hypothesized to synchronize ciliary beating by transmitting cilia coordination cues. The extent of intracellular ciliary connections and the nature of these stimuli remain unclear. Moreover, how BB connections influence the dynamics of individual cilia has not been established. We show by focused ion beam scanning electron microscopy imaging that cilia are coupled both longitudinally and laterally in the ciliate Tetrahymena thermophila by the underlying BB and cortical cytoskeletal network. To visualize the behavior of individual cilia in live, immobilized Tetrahymena cells, we developed Delivered Iron Particle Ubiety Live Light (DIPULL) microscopy. Quantitative and computer analyses of ciliary dynamics reveal that BB connections control ciliary waveform and coordinate ciliary beating. Loss of BB connections reduces cilia-dependent fluid flow forces.
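The coupling-driven synchronization described in this abstract is often illustrated with coupled phase oscillators, where each cilium is reduced to a phase pulled toward its neighbors. A minimal mean-field Kuramoto sketch follows; this is a generic toy model of synchronization, not the analysis used in the paper, and all parameter values are illustrative:

```python
import numpy as np

def step(phases, omega, K, dt=0.01):
    """One Euler step of mean-field Kuramoto coupling:
    d(phi_i)/dt = omega_i + (K/N) * sum_j sin(phi_j - phi_i)."""
    diffs = np.sin(phases[None, :] - phases[:, None])  # diffs[i, j] = sin(phi_j - phi_i)
    return phases + dt * (omega + K * diffs.mean(axis=1))

rng = np.random.default_rng(0)
n = 20
phases = rng.uniform(0, 2 * np.pi, n)  # random initial phases
omega = np.full(n, 2 * np.pi * 12.0)   # identical intrinsic beat frequency (12 Hz)

for _ in range(2000):
    phases = step(phases, omega, K=2.0)

# Kuramoto order parameter: r near 1 indicates synchronized beating
r = abs(np.exp(1j * phases).mean())
```

With identical intrinsic frequencies and positive coupling, the order parameter r approaches 1; the paper's point is that such hydrodynamic-style coupling is not the whole story, since intracellular BB connections also shape coordination.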
-
We introduce a novel method for summarization of whiteboard lecture videos using key handwritten content regions. A deep neural network is used for detecting bounding boxes that contain semantically meaningful groups of handwritten content. A neural network embedding is learnt, under triplet loss, from the detected regions in order to discriminate between unique handwritten content. The detected regions, along with embeddings at every frame of the lecture video, are used to extract unique handwritten content across the video, which is presented as the video summary. Additionally, a spatiotemporal index is constructed from the video, recording the time and location of each individual summary region; this index can potentially be used for content-based search and navigation. We train and test our methods on the publicly available AccessMath dataset. We use the DetEval scheme to benchmark our summarization by recall of unique ground truth objects (92.09%) and average number of summary regions (128) compared to the ground truth (88).
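The triplet loss mentioned above pulls embeddings of the same handwritten region together while pushing embeddings of different regions apart by at least a margin. A minimal NumPy sketch of the standard margin-based form (an illustration of the loss itself, not the paper's network):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: max(0, ||a - p||^2 - ||a - n||^2 + margin).

    anchor, positive : embeddings of the same handwritten content region
    negative         : embedding of a different content region
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

# Illustrative 2-D embeddings:
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])   # same content, nearby in embedding space
n = np.array([-1.0, 0.5])  # different content, far away
loss = triplet_loss(a, p, n)  # 0.0: already separated by more than the margin
```

A zero loss means the negative is already farther than the positive by the margin, so this triplet contributes no gradient; training pipelines therefore typically mine "hard" triplets where the loss is positive.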
-
Advances in neural fields are enabling high-fidelity capture of the shape and appearance of dynamic 3D scenes. However, these capabilities lag behind those offered by conventional representations such as 2D videos because of algorithmic challenges and the lack of large-scale multi-view real-world datasets. We address the dataset limitations with DiVa-360, a real-world 360° dynamic visual dataset that contains synchronized high-resolution and long-duration multi-view video sequences of table-scale scenes captured using a customized low-cost system with 53 cameras. It contains 21 object-centric sequences categorized by different motion types, 25 intricate hand-object interaction sequences, and 8 long-duration sequences for a total of 17.4M frames. In addition, we provide foreground-background segmentation masks, synchronized audio, and text descriptions. We benchmark the state-of-the-art dynamic neural field methods on DiVa-360 and provide insights about existing methods and future challenges on long-duration neural field capture.