Title: Using deep learning for the automated identification of cone and rod photoreceptors from adaptive optics imaging of the human retina
Adaptive optics imaging has enabled enhanced in vivo visualization of individual cone and rod photoreceptors in the human retina. Effective analysis of such high-resolution, feature-rich images requires automated, robust algorithms. This paper describes RC-UPerNet, a novel deep learning algorithm for identifying both types of photoreceptors, evaluated on images from the central and peripheral retina extending out to 30° from the fovea in the nasal and temporal directions. Precision, recall, and Dice scores were 0.928, 0.917, and 0.922, respectively, for cones, and 0.876, 0.867, and 0.870 for rods. The scores agree well with those of human graders and are better than previously reported AI-based approaches.
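The precision, recall, and Dice metrics reported above can be illustrated with a minimal sketch. This is hypothetical code, not the paper's implementation: it matches detected photoreceptor locations to ground-truth annotations by exact pixel coordinate, whereas a real evaluation would likely match within a distance tolerance.

```python
def detection_scores(predicted, truth):
    """Compute precision, recall, and Dice for detected photoreceptor
    centers versus ground-truth annotations.

    predicted, truth: sets of (row, col) coordinates.
    """
    tp = len(predicted & truth)   # true positives: detections that match truth
    fp = len(predicted - truth)   # false positives: spurious detections
    fn = len(truth - predicted)   # false negatives: missed photoreceptors

    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    # Dice coefficient; for point detections this equals the F1 score.
    dice = 2 * tp / (len(predicted) + len(truth)) if (predicted or truth) else 0.0
    return precision, recall, dice


# Toy example: three detections, two of which match the ground truth.
pred = {(0, 0), (1, 1), (2, 2)}
gt = {(0, 0), (1, 1), (3, 3)}
p, r, d = detection_scores(pred, gt)
```

With two true positives out of three detections and three ground-truth points, all three scores come out to 2/3, showing how Dice balances over- and under-detection.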
Award ID(s):
2133650
PAR ID:
10370317
Author(s) / Creator(s):
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Biomedical Optics Express
Volume:
13
Issue:
10
ISSN:
2156-7085
Format(s):
Medium: X
Size(s):
Article No. 5082
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: During vertebrate retinal development, transient populations of retinal progenitor cells with restricted cell fate choices are formed. One of these progenitor populations expresses the Thrb gene and can be identified by activity of the ThrbCRM1 cis-regulatory element. Short-term assays have concluded that these cells preferentially generate cone photoreceptors and horizontal cells; however, developmental timing has precluded an extensive cell type characterization of their progeny. Here we describe the development and validation of a recombinase-based lineage tracing system for the chicken embryo to further characterize the lineage of these cells. Cells marked by the ThrbCRM1 element were found to preferentially form photoreceptors and horizontal cells, as well as a small number of retinal ganglion cells. The photoreceptor progeny are exclusively cone photoreceptors and not rod photoreceptors, confirming that ThrbCRM1 progenitor cells are restricted from the rod fate. In addition, specific subtypes of horizontal cells and retinal ganglion cells were overrepresented, suggesting that ThrbCRM1 progenitor cells are restricted not only by cell type but by cell subtype as well.
  2. Cubomedusae, or box jellyfish, have a complex visual system comprising 24 eyes of four types. Like other cnidarians, their photoreceptor cells are ciliary in morphology, and a range of different techniques together show that at least two of the eye types, the image-forming upper and lower lens eyes, express opsin as the photopigment. The photoreceptors of these two eye types express the same opsin (TcLEO), which belongs to the cnidarian-specific clade cnidops. Interestingly, molecular work has found a high number of opsin genes in box jellyfish, especially in the Caribbean species Tripedalia cystophora, most of which are of unknown function. In the current study, we raised antibodies against three out of five opsins identified from transcriptomic data from T. cystophora and used them to map the expression patterns. These expression patterns suggest one opsin as the photopigment in the slit eyes and another as a putative photoisomerase found in photoreceptors of all four eye types. The last antibody stained nerve-like cells in the tentacles, in connection with nematocytes, and the radial nerve, in connection with the gonads. This is the first time photopigment expression has been localized to the outer segments of the photoreceptors in a cnidarian ocellus (simple eye). The potential presence of a photoisomerase could be another interesting convergence between box jellyfish and vertebrate photoreceptors, but it awaits final experimental proof.
  3. Abstract: This study aimed to develop a valid and reliable instrument, the Mental Images of Scientists Questionnaire (MISQ), and use the instrument to examine Chinese students' mental images of scientists' characters across school levels, regions, living settings, and gender. The final version of the MISQ consisted of four constructs: scientists' cognitive, affective, lifestyle, and job characters. The results showed that senior high school students gave higher scores for scientists' cognitive character construct than junior high and elementary school students did. Students from eastern regions, which have a more highly developed economy, gave the highest scores on cognitive and affective character constructs of scientists. Students from western regions, which have a less developed economy, had a relatively negative impression of scientists. Students' images of scientists' affective, lifestyle, and job characters were positively correlated with their interest in pursuing scientific careers. Future research to explore the relationships between students' mental images of scientists' characters and students' motivation to pursue science-related careers or to engage in scientific practices is recommended.
  4. Multimodal machine learning algorithms aim to learn visual-textual correspondences. Previous work suggests that concepts with concrete visual manifestations may be easier to learn than concepts with abstract ones. We give an algorithm for automatically computing the visual concreteness of words and topics within multimodal datasets. We apply the approach in four settings, ranging from image captions to images/text scraped from historical books. In addition to enabling explorations of concepts in multimodal datasets, our concreteness scores predict the capacity of machine learning algorithms to learn textual/visual relationships. We find that 1) concrete concepts are indeed easier to learn; 2) the large number of algorithms we consider have similar failure cases; 3) the precise positive relationship between concreteness and performance varies between datasets. We conclude with recommendations for using concreteness scores to facilitate future multimodal research. 