Title: Universal pictures: A lithophane codex helps teenagers with blindness visualize nanoscopic systems
People with blindness have limited access to the high-resolution graphical data and imagery of science. Here, a lithophane codex is reported. Its pages display tactile and optical readouts for universal visualization of data by persons with or without eyesight. Prototype codices illustrated microscopy of butterfly chitin (from N-acetylglucosamine monomer to fibril, scale, and whole insect) and were given to high schoolers from the Texas School for the Blind and Visually Impaired. Lithophane graphics of Fischer-Speier esterification reactions and electron micrographs of biological cells were also 3D-printed, along with X-ray structures of proteins (as millimeter-scale 3D models). Students with blindness could visualize (describe, recall, distinguish) these systems, for the first time, at the same resolution as sighted peers (average accuracy = 88%). Tactile visualization occurred alongside laboratory training, synthesis, and mentoring by chemists with blindness, resulting in increased student interest and sense of belonging in science.
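The optical readout of a lithophane comes from a simple mapping: image brightness sets wall thickness, so thin regions transmit backlight and thick regions block it, while the same relief remains readable by touch. A minimal sketch of that brightness-to-thickness mapping, assuming an 8-bit grayscale input; the function name and the 0.6-3.0 mm thickness range are illustrative, not values from the paper:

```python
def lithophane_heights(pixels, min_mm=0.6, max_mm=3.0):
    """Map 8-bit grayscale pixels to lithophane wall thickness (mm).

    Bright pixels -> thin walls (more backlight passes through);
    dark pixels -> thick walls (light is blocked). `pixels` is a 2-D
    list of ints in [0, 255]; the returned grid of thicknesses can be
    extruded into a printable relief.
    """
    span = max_mm - min_mm
    return [[min_mm + span * (255 - p) / 255 for p in row]
            for row in pixels]
```

Slicing software would extrude each cell of the returned grid to its thickness, producing a relief that serves as both the optical and the tactile readout.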
Award ID(s):
2203441
PAR ID:
10534386
Publisher / Repository:
AAAS
Date Published:
Journal Name:
Science Advances
Volume:
10
Issue:
2
ISSN:
2375-2548
Page Range / eLocation ID:
eadj8099
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Handheld models help students visualize three-dimensional (3D) objects; this is especially true for students with blindness, who use large 3D models to visualize imagery by hand. The mouth has finer tactile sensors than the hand, which could improve visualization using microscopic models that are portable, inexpensive, and disposable, yet the mouth remains unused in tactile learning. Here, we created bite-size 3D models of protein molecules from "gummy bear" gelatin or nontoxic resin. Models were made as small as a grain of rice and could be coded with flavor and packaged like candy. Mouth, hands, and eyesight were tested at identifying specific structures. Students recognized structures by mouth at 85.59% accuracy, similar to recognition by eyesight using computer animation. Recall accuracy of structures was higher by mouth than by hand for 40.91% of students, equal for 31.82%, and lower for 27.27%. The convenient use of entire packs of tiny, cheap, portable models can make 3D imagery more accessible to students.
  2. Large-scale digitization projects such as #ScanAllFishes and oVert are generating high-resolution microCT scans of vertebrates by the thousands. Data from these projects are shared with the community through aggregate 3D specimen repositories like MorphoSource under various open licenses. We anticipate an explosion of quantitative research in organismal biology with the convergence of available data and the methodologies to analyse them.
Though the data are available, the road from a series of images to analysis is fraught with challenges for most biologists. It involves the tedious tasks of data format conversion, accurately preserving the spatial scale of the data, 3D visualization and segmentation, and acquiring measurements and annotations. When scientists use commercial software with proprietary formats, a roadblock to data exchange, collaboration, and reproducibility is erected that hurts the scientific community's efforts to broaden participation in research.
We developed SlicerMorph as an extension of 3D Slicer, a biomedical visualization and analysis ecosystem with extensive visualization and segmentation capabilities built on proven Python-scriptable open-source libraries such as the Visualization Toolkit and the Insight Toolkit. In addition to the core functionalities of Slicer, SlicerMorph provides users with modules to conveniently retrieve open-access 3D models or import their own 3D volumes, annotate 3D curve- and patch-based landmarks, generate landmark templates, conduct geometric morphometric analyses of 3D organismal form using both landmark-driven and landmark-free approaches, and create 3D animations from their results. We highlight how these individual modules can be tied together to establish complete workflows from image sequence to morphospace.
Our software development efforts were supplemented with short courses and workshops that cover the fundamentals of 3D imaging and morphometric analyses as they apply to the study of organismal form and shape in evolutionary biology. Our goal is to establish a community of organismal biologists centred around Slicer and SlicerMorph to facilitate easy exchange of data and results and collaboration using 3D specimens. Our proposition to our colleagues is that using a common open platform supported by a large user and developer community ensures the longevity and sustainability of the tools beyond the initial development effort.
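The landmark-driven geometric morphometrics that SlicerMorph supports rests on Procrustes superimposition: translation, scale, and rotation are removed before landmark configurations are compared in morphospace. A self-contained 2-D sketch of that idea follows; this is not SlicerMorph code (its modules perform 3-D, multi-specimen generalized Procrustes analysis), just an illustration of the core superimposition step:

```python
import cmath
import math

def procrustes_distance(shape_a, shape_b):
    """Ordinary Procrustes distance between two 2-D landmark
    configurations (lists of (x, y) tuples in the same landmark order).
    Removes translation, scale, and rotation, then returns the
    residual root-sum-of-squares shape difference."""
    za = [complex(x, y) for x, y in shape_a]
    zb = [complex(x, y) for x, y in shape_b]
    # 1. Center each configuration at its centroid (remove translation).
    ca, cb = sum(za) / len(za), sum(zb) / len(zb)
    za = [z - ca for z in za]
    zb = [z - cb for z in zb]
    # 2. Rescale each to unit centroid size (remove scale).
    sa = math.sqrt(sum(abs(z) ** 2 for z in za))
    sb = math.sqrt(sum(abs(z) ** 2 for z in zb))
    za = [z / sa for z in za]
    zb = [z / sb for z in zb]
    # 3. Optimal rotation of shape_b onto shape_a: the phase of the
    #    cross-term sum maximizes their alignment.
    rot = cmath.phase(sum(a * b.conjugate() for a, b in zip(za, zb)))
    zb = [z * cmath.exp(1j * rot) for z in zb]
    # 4. Residual distance after superimposition.
    return math.sqrt(sum(abs(a - b) ** 2 for a, b in zip(za, zb)))
```

Two specimens that differ only by position, size, and orientation have a distance of zero; nonzero values reflect genuine shape difference, which is what downstream morphometric analyses operate on.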
  3. Astronomy combines the richness and complexity of science and mathematics and captivates the public imagination. While much of astronomy is presented as imagery, many individuals can benefit from alternative methods of presentation such as tactile resources. Within the suite of methods and technologies we employ, we explored the utility of 3D printing to translate astronomical research data and models into tactile, textured forms. As groundwork for our program, the STEM Career Exploration Lab (STEM-CEL), we extensively tested the 3D designs, developed unique templates for 3D prints, and subsequently incorporated these materials into publicly accessible programs and, more formally, into summer camps specifically for students including those with blindness and visual impairment (B/VI) and their educators. This paper traces the important steps of public testing to ensure our 3D prints are robust, understandable, and represent the scientific research data and models with integrity. Our initial testbed program also included a STEM camp project where we assessed students' and educators' interactions with the materials. We determined that the 3D prints stimulate interest in science as well as in 3D printing technology. The successful pilot testing outcome was integrated into our strategy for our more ambitious program, the STEM-CEL. In this paper, we also briefly discuss the results of the initial testing as well as some specific results from the STEM-CEL regarding our 3D prints for star clusters and galaxies. We used pre- and post-intervention surveys, astronomy assessments, and student and educator interviews, resulting in what is likely the largest research study on astronomy education specifically for students with B/VI. We found that the experience of holding a planet, the Sun, a star cluster, or a model of a galaxy resonates well with even the most casual interest in astronomy.
  4. Graphical representations are ubiquitous in the learning and teaching of science, technology, engineering, and mathematics (STEM). However, these materials are often not accessible to the over 547,000 students in the United States with blindness and significant visual impairment, creating barriers to pursuing STEM educational and career pathways. Furthermore, even when such materials are made available to visually impaired students, access is likely through literalized modes (e.g., braille, verbal description), which is problematic as these approaches (1) do not directly convey spatial information and (2) are different from the graphic-based materials used by students without visual impairment. The purpose of this study was to design and evaluate a universally accessible system for communicating graphical representations in STEM classes. By combining a multisensory vibro-audio interface and an app running on consumer mobile hardware, the system is meant to work equally well for all students, irrespective of their visual status. We report the design of the experimental system and the results of an experiment comparing learning performance with the system against traditional (visual or tactile) diagrams for sighted participants (n = 20) and visually impaired participants (n = 9), respectively. The experimental multimodal diagrammatic system (MDS) resulted in significant learning gains for both groups of participants, and the results revealed no statistically significant differences in the capacity for learning from graphical information across the two comparison groups. Likewise, there were no statistically significant differences in the capacity for learning from graphical information between the stimuli presented through the experimental system and the traditional (visual or tactile) diagram control conditions, for either participant group.
These findings suggest that both groups were able to learn graphical information from the experimental system as well as traditional diagram presentation materials. This learning modality was supported without the need for conversion of the diagrams to make them accessible for participants who required tactile materials. The system also provided additional multisensory information for sighted participants to interpret and answer questions about the diagrams. Findings are interpreted in terms of new universal design principles for producing multisensory graphical representations that would be accessible to all learners. 
  5. We present a design-based exploration of the potential to reinterpret glyph-based visualization of scalar fields on 3D surfaces, a traditional scientific visualization technique, as a data physicalization technique. Even with the best virtual reality displays, users often struggle to correctly interpret spatial relationships in 3D datasets; thus, we are motivated to understand the extent to which traditional scientific visualization methods can translate to physical media, where users may simultaneously leverage their visual systems and tactile senses to, in theory, better understand and connect with the data of interest. This pictorial traces the process of our design for a specific user study experiment: (1) inspiration, (2) exploring the data physicalization design space, (3) prototyping with 3D printing, and (4) applying the techniques to different synthetic datasets. We call our most recent and compelling visual/tactile design "boxcars on potatoes," and the next step in the research is to run a user-based evaluation to elucidate how this design compares to several of the others pictured here.