Title: Universal pictures: A lithophane codex helps teenagers with blindness visualize nanoscopic systems
People with blindness have limited access to the high-resolution graphical data and imagery of science. Here, a lithophane codex is reported. Its pages display tactile and optical readouts for universal visualization of data by persons with or without eyesight. Prototype codices illustrated microscopy of butterfly chitin, from the N-acetylglucosamine monomer to fibril, scale, and whole insect, and were given to high schoolers from the Texas School for the Blind and Visually Impaired. Lithophane graphics of Fischer-Speier esterification reactions and electron micrographs of biological cells were also 3D-printed, along with X-ray structures of proteins (as millimeter-scale 3D models). Students with blindness could, for the first time, visualize (describe, recall, distinguish) these systems at the same resolution as sighted peers (average accuracy = 88%). Tactile visualization occurred alongside laboratory training, synthesis, and mentoring by chemists with blindness, resulting in increased student interest and sense of belonging in science.
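The lithophane readout rests on a simple mapping: image brightness is converted to material thickness, so darker regions are printed thicker, transmit less backlight, and present higher relief to the fingertip. Below is a minimal sketch of that mapping only, not the authors' production pipeline; the file name "micrograph.png" and the 0.6-3.0 mm thickness range are illustrative assumptions.

    # Minimal sketch: convert a grayscale image into a lithophane thickness map.
    # Assumptions (not from the paper): input file "micrograph.png" and a
    # 0.6-3.0 mm thickness range.
    import numpy as np
    from PIL import Image

    def lithophane_thickness(path, t_min_mm=0.6, t_max_mm=3.0):
        """Return a 2D array of material thicknesses in millimeters.

        Darker pixels map to thicker material, so dark regions block more
        backlight and form the higher relief that a fingertip reads.
        """
        img = Image.open(path).convert("L")                   # 8-bit grayscale
        brightness = np.asarray(img, dtype=float) / 255.0     # 0 = black, 1 = white
        return t_min_mm + (1.0 - brightness) * (t_max_mm - t_min_mm)

    thickness = lithophane_thickness("micrograph.png")        # assumed input image
    print(thickness.shape, thickness.min(), thickness.max())

A height map like this can then be extruded into a mesh (for example, an STL) and printed in translucent material, so the same page reads by touch and, when backlit, by eye.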
Award ID(s):
2203441
PAR ID:
10534386
Author(s) / Creator(s):
; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ;
Publisher / Repository:
AAAS
Date Published:
Journal Name:
Science Advances
Volume:
10
Issue:
2
ISSN:
2375-2548
Page Range / eLocation ID:
eadj8099
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Handheld models help students visualize three-dimensional (3D) objects, especially students with blindness, who use large 3D models to visualize imagery by hand. The mouth has finer tactile sensors than the hands, which could improve visualization using microscopic models that are portable, inexpensive, and disposable, yet the mouth remains unused in tactile learning. Here, we created bite-size 3D models of protein molecules from “gummy bear” gelatin or nontoxic resin. Models were made as small as a grain of rice and could be coded with flavor and packaged like candy. Mouth, hands, and eyesight were tested at identifying specific structures. Students recognized structures by mouth at 85.59% accuracy, similar to recognition by eyesight using computer animation. Recall accuracy of structures was higher by mouth than by hand for 40.91% of students, equal for 31.82%, and lower for 27.27%. The convenient use of entire packs of tiny, cheap, portable models can make 3D imagery more accessible to students.
  2. Large-scale digitization projects such as #ScanAllFishes and oVert are generating high-resolution microCT scans of vertebrates by the thousands. Data from these projects are shared with the community through aggregate 3D specimen repositories like MorphoSource under various open licenses. We anticipate an explosion of quantitative research in organismal biology as the available data converge with the methodologies to analyse them. Though the data are available, the road from a series of images to analysis is fraught with challenges for most biologists: it involves tedious data format conversions, accurate preservation of the spatial scale of the data, 3D visualization and segmentation, and acquisition of measurements and annotations. When scientists use commercial software with proprietary formats, a roadblock to data exchange, collaboration, and reproducibility is erected that hurts the scientific community's efforts to broaden participation in research. We developed SlicerMorph as an extension of 3D Slicer, a biomedical visualization and analysis ecosystem with extensive visualization and segmentation capabilities built on proven, Python-scriptable open-source libraries such as the Visualization Toolkit and the Insight Toolkit. In addition to the core functionalities of Slicer, SlicerMorph provides users with modules to conveniently retrieve open-access 3D models or import users' own 3D volumes, annotate 3D curve- and patch-based landmarks, generate landmark templates, conduct geometric morphometric analyses of 3D organismal form using both landmark-driven and landmark-free approaches, and create 3D animations from their results. We highlight how these individual modules can be tied together to establish complete workflows from image sequence to morphospace; a minimal scripting sketch appears after this list. Our software development efforts were supplemented with short courses and workshops that cover the fundamentals of 3D imaging and morphometric analyses as they apply to the study of organismal form and shape in evolutionary biology. Our goal is to establish a community of organismal biologists centred around Slicer and SlicerMorph to facilitate easy exchange of data and results and to foster collaborations using 3D specimens. Our proposition to our colleagues is that using a common open platform supported by a large user and developer community ensures the longevity and sustainability of the tools beyond the initial development effort.
  3. Graphical representations are ubiquitous in the learning and teaching of science, technology, engineering, and mathematics (STEM). However, these materials are often not accessible to the over 547,000 students in the United States with blindness and significant visual impairment, creating barriers to pursuing STEM educational and career pathways. Furthermore, even when such materials are made available to visually impaired students, access is likely through literalized modes (e.g., braille, verbal description), which is problematic because these approaches (1) do not directly convey spatial information and (2) differ from the graphic-based materials used by students without visual impairment. The purpose of this study was to design and evaluate a universally accessible system for communicating graphical representations in STEM classes. By combining a multisensory vibro-audio interface with an app running on consumer mobile hardware, the system is meant to work equally well for all students, irrespective of their visual status; a minimal sketch of the underlying vibro-audio mapping appears after this list. We report the design of the experimental system and the results of an experiment comparing learning performance with the system to traditional (visual or tactile) diagrams for sighted participants (n = 20) and visually impaired participants (n = 9), respectively. The experimental multimodal diagrammatic system (MDS) produced significant learning gains for both groups of participants, and the results revealed no statistically significant differences in the capacity for learning from graphical information across the two comparison groups. Likewise, there were no statistically significant differences in the capacity for learning from graphical information between the stimuli presented through the experimental system and the traditional (visual or tactile) diagram control conditions, for either participant group. These findings suggest that both groups were able to learn graphical information from the experimental system as well as from traditional diagram presentation materials. This learning modality was supported without the need to convert the diagrams to make them accessible for participants who required tactile materials, and the system provided additional multisensory information that sighted participants could use to interpret and answer questions about the diagrams. Findings are interpreted in terms of new universal design principles for producing multisensory graphical representations that are accessible to all learners.
  4. We present a design-based exploration of the potential to reinterpret glyph-based visualization of scalar fields on 3D surfaces, a traditional scientific visualization technique, as a data physicalization technique. Even with the best virtual reality displays, users often struggle to correctly interpret spatial relationships in 3D datasets; thus, we are motivated to understand the extent to which traditional scientific visualization methods can translate to physical media where users may simultaneously leverage their visual systems and tactile senses to, in theory, better understand and connect with the data of interest. This pictorial traces the process of our design for a specific user study experiment: (1) inspiration, (2) exploring the data physicalization design space, (3) prototyping with 3D printing, (4) applying the techniques to different synthetic datasets. We call our most recent and compelling visual/tactile design boxcars on potatoes, and the next step in the research is to run a user-based evaluation to elucidate how this design compares to several of the others pictured here. 
  5. Recent advances in Artificial Intelligence and Machine Learning (e.g., AlphaFold, RoseTTAFold, and ESMFold) enable prediction of three-dimensional (3D) protein structures from amino acid sequences alone at accuracies comparable to lower-resolution experimental methods. These tools have been employed to predict structures across entire proteomes and from the results of large-scale metagenomic sequence studies, yielding an exponential increase in available biomolecular 3D structural information. Given the enormous volume of this newly computed biostructure data, there is an urgent need for robust tools to manage, search, cluster, and visualize large collections of structures. Equally important is the capability to efficiently summarize and visualize metadata, biological/biochemical annotations, and structural features, particularly when working with vast numbers of protein structures, both of experimental origin from the Protein Data Bank (PDB) and from computationally predicted models. Moreover, researchers require advanced visualization techniques that support interactive exploration of multiple sequences and structural alignments. This paper introduces a suite of tools provided on the RCSB PDB research-focused web portal RCSB.org, tailor-made for efficient management, search, organization, and visualization of this burgeoning corpus of 3D macromolecular structure data; a minimal retrieval sketch appears after this list.
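For the SlicerMorph record above (item 2), the following is a minimal sketch of the kind of scripting that 3D Slicer's built-in Python environment supports, since SlicerMorph is distributed as a Slicer extension. The file name "specimen.nrrd" is an assumed, locally downloaded microCT volume; SlicerMorph's own landmarking and morphometrics modules are not reproduced here.

    # Minimal sketch (run inside 3D Slicer's built-in Python console, where the
    # "slicer" package is available). "specimen.nrrd" is an assumed microCT scan
    # downloaded locally, for example from MorphoSource.
    import slicer

    volume_node = slicer.util.loadVolume("specimen.nrrd")   # load the scan with its metadata
    voxels = slicer.util.arrayFromVolume(volume_node)       # NumPy view of the voxel data
    print(voxels.shape, voxels.dtype)

    spacing_mm = volume_node.GetSpacing()                    # physical voxel size in mm
    print("voxel spacing (mm):", spacing_mm)

Working at this level keeps the voxel spacing, and therefore the specimen's physical scale, attached to the data, which is one of the pain points the abstract calls out.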
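For the multimodal diagrammatic system record above (item 3), the abstract describes a vibro-audio interface on consumer mobile hardware; the general idea of such displays is to trigger vibration and sound whenever the user's finger rests on an on-screen diagram element. The sketch below illustrates only that mapping with a stubbed touch mask; the study's actual app, hardware APIs, and stimuli are not described in the abstract and are not represented here.

    # Illustrative sketch only: touch input, vibration, and audio are stubbed out.
    import numpy as np

    # Hypothetical 0/1 mask of a rasterized diagram, one cell per screen pixel.
    diagram_mask = np.zeros((100, 100), dtype=bool)
    diagram_mask[20:25, 10:90] = True                 # a single horizontal bar

    def feedback_for_touch(x, y, mask=diagram_mask):
        """Return (vibrate, play_tone) flags for a touch at pixel column x, row y."""
        on_element = bool(mask[y, x])
        return on_element, on_element                 # haptic and audio fire together

    print(feedback_for_touch(50, 22))                 # on the bar    -> (True, True)
    print(feedback_for_touch(50, 60))                 # blank space   -> (False, False)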
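For the RCSB PDB record above (item 5), the portal's search, clustering, and visualization tools are web-based, but experimental structures can also be retrieved programmatically from the public file archive. A minimal sketch follows; the entry ID 4HHB (human deoxyhemoglobin) is an arbitrary example, and none of the portal's new tools are reproduced here.

    # Minimal sketch: download one PDB entry's coordinate file from the public
    # file archive. "4HHB" is just an example entry ID.
    import urllib.request

    pdb_id = "4HHB"
    url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
    with urllib.request.urlopen(url) as response:
        pdb_text = response.read().decode("utf-8")

    # Quick sanity check: count ATOM records in the downloaded coordinate file.
    n_atoms = sum(1 for line in pdb_text.splitlines() if line.startswith("ATOM"))
    print(pdb_id, "ATOM records:", n_atoms)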