Teachers of the visually impaired (TVIs) regularly present tactile materials (tactile graphics, 3D models, and real objects) to students with vision impairments. Researchers have become increasingly interested in designing tools to support the use of tactile materials, but we still lack an in-depth understanding of how tactile materials are created and used in practice today. To address this gap, we conducted interviews with 21 TVIs and a 3-week diary study with eight of them. We found that tactile materials were regularly used for academic as well as non-academic concepts, such as tactile literacy, motor ability, and spatial awareness. Real objects and 3D models served as “stepping stones” to tactile graphics, and our participants preferred to teach with 3D models, despite finding them difficult to create, obtain, and modify. Use of certain materials also carried social implications; participants selected materials that fostered student independence and allowed classroom inclusion. We contribute design considerations, encouraging future work on tactile materials to enable student and TVI co-creation, facilitate rapid prototyping, and promote movement and spatial awareness. To support future research in this area, our paper provides a foundational understanding of current practices. We bridge these practices to established pedagogical approaches and highlight opportunities for growth in this important genre of educational materials.
RoboGraphics: Dynamic Tactile Graphics Powered by Mobile Robots
Tactile graphics are a common way to present information to people with vision impairments. They can be used to explore a broad range of static visual content but are not well suited to representing animation or interactivity. We introduce a new approach to creating dynamic tactile graphics that combines a touch screen tablet, static tactile overlays, and small mobile robots. We present a prototype system called RoboGraphics and several proof-of-concept applications. We evaluated our prototype with seven participants with varying levels of vision, comparing the RoboGraphics approach to a flat-screen, audio-tactile interface. Our results show that dynamic tactile graphics can help visually impaired participants explore data quickly and accurately.
- Award ID(s):
- 1652907
- PAR ID:
- 10165067
- Date Published:
- Journal Name:
- ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility
- Page Range / eLocation ID:
- 318 to 328
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Introduction: The current work probes the effectiveness of multimodal touch screen tablet electronic devices in conveying science, technology, engineering, and mathematics graphics via vibrations and sounds to individuals who are visually impaired (i.e., blind or low vision) and compares it with similar graphics presented in an embossed format. Method: A volunteer sample of 22 participants who are visually impaired, recruited from a summer camp and local schools for blind students, took part in the current study. Participants were first briefly (∼30 min) trained on how to explore graphics via a multimodal touch screen tablet. They then explored six graphic types (number line, table, pie chart, bar chart, line graph, and map) displayed via embossed paper and tablet. Participants answered three content questions per graphic type following exploration. Results: Participants were only 6% more accurate when answering questions regarding an embossed graphic as opposed to a tablet graphic. A paired-samples t test indicated that this difference was not significant, t(14) = 1.91, p = .07. Follow-up analyses indicated that presentation medium did not interact with graphic type, F(5, 50) = 0.43, p = .83, nor with visual ability, F(1, 13) = 0.00, p = .96. Discussion: The findings demonstrate that multimodal touch screen tablets may be comparable to embossed graphics in conveying iconographic science and mathematics content to individuals with visual impairments, regardless of the severity of impairment. The relative equivalence in response accuracy between mediums was unexpected, given that most students who participated were braille readers and had experience reading embossed graphics, whereas they were introduced to the tablet on the day of testing. Implications for practitioners: This work illustrates that multimodal touch screen tablets may be an effective option for general education teachers or teachers of students with visual impairments to use in their educational practices.
Currently, preparing accessible graphics is time consuming and requires significant advance work, but such tablets offer a solution for presenting “real-time” displays of these graphics in class.
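The paired-samples t test reported above can be reproduced in a few lines. The sketch below computes the statistic by hand on hypothetical accuracy scores; the data and resulting values are illustrative only, not the study's:

```python
import math
import statistics

def paired_t(x, y):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the pairwise differences; df = n - 1."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    se = statistics.stdev(d) / math.sqrt(n)
    return statistics.mean(d) / se, n - 1

# Hypothetical proportion-correct scores for five participants
# (illustrative values, NOT the study's data).
embossed = [0.70, 0.80, 0.65, 0.90, 0.75]
tablet = [0.68, 0.74, 0.66, 0.85, 0.70]
t_stat, df = paired_t(embossed, tablet)
```

In practice one would compare `t_stat` against the t distribution with `df` degrees of freedom (e.g., via `scipy.stats.ttest_rel`) to obtain the p value reported in the abstract.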
-
Grasping is a crucial task in robotics, necessitating tactile feedback and reactive grasping adjustments for robust grasping of objects under various conditions and with differing physical properties. In this paper, we introduce LeTac-MPC, a learning-based model predictive control (MPC) approach for tactile-reactive grasping. Our approach enables the gripper to grasp objects with different physical properties in dynamic and force-interactive tasks. We utilize a vision-based tactile sensor, GelSight [1], which is capable of perceiving high-resolution tactile feedback that contains information on the physical properties and states of the grasped object. LeTac-MPC incorporates a differentiable MPC layer designed to model the embeddings extracted by a neural network (NN) from tactile feedback. This design enables convergent and robust grasping control at a frequency of 25 Hz. We propose a fully automated data collection pipeline and collect a dataset using only standardized blocks with different physical properties. Nevertheless, our trained controller generalizes to everyday objects with different sizes, shapes, materials, and textures. The experimental results demonstrate the effectiveness and robustness of the proposed approach. We compare LeTac-MPC with two purely model-based tactile-reactive controllers (MPC and PD) and with open-loop grasping. Our results show that LeTac-MPC achieves the best performance and generalizability in dynamic and force-interactive tasks. We release our code and dataset at https://github.com/ZhengtongXu/LeTac-MPC.
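The receding-horizon idea LeTac-MPC builds on can be hinted at with a deliberately simplified sketch. The one-dimensional dynamics, cost weight, and function names below are hypothetical; the actual system optimizes over learned tactile embeddings through a differentiable MPC layer, not a scalar gripper width:

```python
# Toy receding-horizon grasp controller (illustrative only; the real
# LeTac-MPC uses a differentiable MPC layer over NN tactile embeddings).
def mpc_step(width, target, lam=0.5):
    """Closed-form minimizer of (width + u - target)^2 + lam * u^2
    under the toy dynamics width_{k+1} = width_k + u_k."""
    return (target - width) / (1.0 + lam)

def run_controller(width, target, steps=20):
    """Simulate the control loop (~25 Hz in the real system)."""
    for _ in range(steps):
        width += mpc_step(width, target)
    return width

final = run_controller(40.0, 30.0)  # gripper width converges toward target
```

Each step shrinks the tracking error by a constant factor, so the controller converges geometrically; in the learned version the "target" is implicitly defined by the tactile embedding rather than fixed in advance.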
-
Dynamically Interactive Visualization (DIVI) is a novel approach for orchestrating interactions within and across static visualizations. DIVI deconstructs Scalable Vector Graphics charts at runtime to infer content and coordinate user input, decoupling interaction from specification logic. This decoupling allows interactions to extend and compose freely across different tools, chart types, and analysis goals. DIVI exploits positional relations of marks to detect chart components such as axes and legends, reconstruct scales and view encodings, and infer data fields. DIVI then enumerates candidate transformations across inferred data to perform linking between views. To support dynamic interaction without prior specification, we introduce a taxonomy that formalizes the space of standard interactions by chart element, interaction type, and input event. We demonstrate DIVI's usefulness for rapid data exploration and analysis through a usability study with 13 participants and a diverse gallery of dynamically interactive visualizations, including single chart, multi-view, and cross-tool configurations.
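The flavor of the runtime deconstruction DIVI performs can be sketched in miniature: parse an SVG chart, collect its marks, and recover a linear y scale from two axis tick labels. The SVG snippet, class names, and scale-fitting logic here are invented for illustration and are not DIVI's actual implementation:

```python
import xml.etree.ElementTree as ET

# Hypothetical bar chart fragment: two rect marks plus two y-axis ticks.
SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect class="mark" x="10" y="80" width="20" height="40"/>
  <rect class="mark" x="40" y="60" width="20" height="60"/>
  <text class="tick" x="0" y="120">0</text>
  <text class="tick" x="0" y="20">100</text>
</svg>"""

NS = {"svg": "http://www.w3.org/2000/svg"}
root = ET.fromstring(SVG)
marks = root.findall(".//svg:rect", NS)

# Read (pixel y, data value) pairs from the tick labels.
ticks = [(float(t.get("y")), float(t.text))
         for t in root.findall(".//svg:text", NS)]

# Fit a linear y scale from two ticks: value = a * y + b.
(y0, v0), (y1, v1) = ticks
a = (v1 - v0) / (y1 - y0)
b = v0 - a * y0

def decode(y):
    """Map a pixel y coordinate back to a data value."""
    return a * y + b
```

With the scale reconstructed, a tool could decode each mark's position into data and re-attach interactions (brushing, linking) without access to the chart's original specification.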
-
Introduction: Informational graphics and data representations (e.g., charts and figures) are critical for accessing educational content. Novel technologies, such as multimodal touchscreens that display audio, haptic, and visual information, are promising platforms for diverse means of accessing digital content. This work evaluated educational graphics rendered on a touchscreen compared to the current standard for accessing graphical content. Method: Three bar charts and geometry figures were evaluated on students' (N = 20) ability to orient to and extract information from the touchscreen and print. Participants explored the graphics and then were administered a set of questions (11–12 depending on graphic group). In addition, participants' attitudes toward using the mediums were assessed. Results: Participants performed statistically significantly better on questions assessing information orientation using the touchscreen than print for both bar charts and geometry figures. No statistically significant difference in information extraction ability was found between mediums on either graphic type. Participants responded significantly more favorably to the touchscreen than the print graphics, rating them as more helpful, interesting, and fun, and less confusing. Discussion: Participants were highly successful at accessing and orienting to information using the touchscreen, which was the preferred means of accessing graphical information when compared to the print image for both geometry figures and bar charts. This study highlights challenges in presenting graphics both on touchscreens and in print. Implications for practitioners: This study offers preliminary support for the use of multimodal touchscreen tablets as educational tools. Student ability using touchscreen-based graphics seems to be comparable to that with traditional types of graphics (large print and embossed, tactile graphics), although further investigation may be necessary for tactile graphic users.
In summary, educators of students with blindness and visual impairments should consider ways to utilize new technologies, such as touchscreens, to provide more diverse access to graphical information.

