Title: RoboGraphics: Dynamic Tactile Graphics Powered by Mobile Robots
Tactile graphics are a common way to present information to people with vision impairments. They can be used to explore a broad range of static visual content but are not well suited to representing animation or interactivity. We introduce a new approach to creating dynamic tactile graphics that combines a touch screen tablet, static tactile overlays, and small mobile robots. We present a prototype system called RoboGraphics and several proof-of-concept applications. We evaluated our prototype with seven participants with varying levels of vision, comparing the RoboGraphics approach to a flat screen, audio-tactile interface. Our results show that dynamic tactile graphics can help visually impaired participants explore data quickly and accurately.
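The abstract describes positioning small mobile robots over a tactile overlay on a tablet. A minimal sketch of the underlying idea, with illustrative function names and tablet dimensions that are assumptions rather than the RoboGraphics API:

```python
# Hypothetical sketch: map a bar-chart data point onto tablet coordinates
# for a mobile robot. All names and dimensions are illustrative assumptions,
# not the actual RoboGraphics implementation.

def data_to_tablet_xy(value, max_value, column, n_columns,
                      tablet_w_mm=240.0, tablet_h_mm=160.0):
    """Place a bar-chart value as a robot target: x by column, y by magnitude."""
    x = (column + 0.5) / n_columns * tablet_w_mm          # center of the column
    y = tablet_h_mm - (value / max_value) * tablet_h_mm   # taller bar -> nearer top edge
    return x, y

# Example: third of four columns, value 75 of 100
print(data_to_tablet_xy(75, 100, 2, 4))  # -> (150.0, 40.0)
```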
Award ID(s):
1652907
NSF-PAR ID:
10165067
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility
Page Range / eLocation ID:
318 to 328
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Teachers of the visually impaired (TVIs) regularly present tactile materials (tactile graphics, 3D models, and real objects) to students with vision impairments. Researchers have been increasingly interested in designing tools to support the use of tactile materials, but we still lack an in-depth understanding of how tactile materials are created and used in practice today. To address this gap, we conducted interviews with 21 TVIs and a 3-week diary study with eight of them. We found that tactile materials were regularly used for academic as well as non-academic concepts like tactile literacy, motor ability, and spatial awareness. Real objects and 3D models served as “stepping stones” to tactile graphics, and our participants preferred to teach with 3D models, despite finding them difficult to create, obtain, and modify. Use of certain materials also carried social implications; participants selected materials that fostered student independence and allowed classroom inclusion. We contribute design considerations, encouraging future work on tactile materials to enable student and TVI co-creation, facilitate rapid prototyping, and promote movement and spatial awareness. To support future research in this area, our paper provides a fundamental understanding of current practices. We bridge these practices to established pedagogical approaches and highlight opportunities for growth regarding this important genre of educational materials.
  2. Dynamically Interactive Visualization (DIVI) is a novel approach for orchestrating interactions within and across static visualizations. DIVI deconstructs Scalable Vector Graphics charts at runtime to infer content and coordinate user input, decoupling interaction from specification logic. This decoupling allows interactions to extend and compose freely across different tools, chart types, and analysis goals. DIVI exploits positional relations of marks to detect chart components such as axes and legends, reconstruct scales and view encodings, and infer data fields. DIVI then enumerates candidate transformations across inferred data to perform linking between views. To support dynamic interaction without prior specification, we introduce a taxonomy that formalizes the space of standard interactions by chart element, interaction type, and input event. We demonstrate DIVI's usefulness for rapid data exploration and analysis through a usability study with 13 participants and a diverse gallery of dynamically interactive visualizations, including single chart, multi-view, and cross-tool configurations. 
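The DIVI abstract mentions exploiting positional relations of marks to detect chart components such as axes. A hedged, much-simplified sketch of that principle (this is not DIVI's actual implementation; the heuristic and names are assumptions): tick marks that share one coordinate while varying along the other suggest an axis.

```python
# Illustrative sketch only: infer a likely axis from positional relations of
# tick marks, as the abstract describes at a high level.

def infer_axis(marks, tol=1e-6):
    """marks: list of (x, y) tick positions. Returns 'x-axis', 'y-axis', or None."""
    xs = {round(x / tol) for x, _ in marks}   # distinct x positions (within tolerance)
    ys = {round(y / tol) for _, y in marks}   # distinct y positions (within tolerance)
    if len(ys) == 1 and len(xs) > 1:
        return "x-axis"   # ticks aligned along a horizontal line
    if len(xs) == 1 and len(ys) > 1:
        return "y-axis"   # ticks aligned along a vertical line
    return None

print(infer_axis([(0, 300), (50, 300), (100, 300)]))  # -> x-axis
```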
  3.
    Despite having widespread application in the biomedical sciences, flow cytometers have several limitations that prevent their application to point-of-care (POC) diagnostics in resource-limited environments. 3D printing provides a cost-effective approach to improve the accessibility of POC devices in such environments. Towards this goal, we introduce a 3D-printed imaging platform (3DPIP) capable of accurately counting particles and performing fluorescence microscopy. In our 3DPIP, captured microscopic images of particle flow are processed by a custom-developed particle counter code to provide a particle count. This prototype uses a machine vision-based algorithm to identify particles from captured flow images and is flexible enough to allow for labeled and label-free particle counting. Additionally, the particle counter code returns particle coordinates with respect to time, which can further be used to perform particle image velocimetry. These results can help estimate forces acting on particles, and identify and sort different types of cells/particles. We evaluated the performance of this prototype by counting 10 μm polystyrene particles diluted in deionized water at different concentrations and comparing the results with a commercial Beckman-Coulter Z2 particle counter. The 3DPIP can count particle concentrations down to ∼100 particles per mL with a standard deviation of ±20 particles, which is comparable to the results obtained on a commercial particle counter. Our platform produces accurate results at flow rates up to 9 mL h⁻¹ for concentrations below 1000 particles per mL, while 5 mL h⁻¹ produces accurate results above this concentration limit. Aside from performing flow-through experiments, our instrument is capable of performing static experiments comparable to a plate reader. In this configuration, our instrument is able to count between 10 and 250 cells per image, depending on the prepared concentration of bacteria samples (Citrobacter freundii; ATCC 8090). Overall, this platform represents a first step towards the development of an affordable, fully 3D-printable imaging flow cytometry instrument for use in resource-limited clinical environments.
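The machine-vision counting principle the abstract describes can be illustrated with a minimal sketch: threshold an intensity image and count connected bright blobs. The real 3DPIP pipeline is more involved; this pure-Python version only shows the core idea, and all names are illustrative assumptions.

```python
# Minimal sketch of label-free particle counting: threshold a grayscale frame
# and count 4-connected bright blobs via flood fill. Not the 3DPIP codebase.

def count_particles(frame, threshold=128):
    """frame: 2D list of pixel intensities. Counts 4-connected blobs above threshold."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if frame[i][j] > threshold and not seen[i][j]:
                count += 1                      # found a new blob
                stack = [(i, j)]
                while stack:                    # flood-fill the whole blob
                    r, c = stack.pop()
                    if 0 <= r < h and 0 <= c < w and frame[r][c] > threshold and not seen[r][c]:
                        seen[r][c] = True
                        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return count

frame = [[0, 200,   0,   0],
         [0, 200,   0, 180],
         [0,   0,   0, 180]]
print(count_particles(frame))  # -> 2
```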
  4. Lagrangian/Eulerian hybrid strand-based hair simulation techniques have quickly become a popular approach in VFX and real-time graphics applications. With Lagrangian hair dynamics, the inter-hair contacts are resolved on the Eulerian grid using the continuum method, i.e., the MPM scheme with the granular Drucker-Prager rheology, to avoid expensive collision detection and handling. This fuzzy collision handling makes the authoring process significantly easier. However, although current hair grooming tools provide a wide range of strand-based modeling tools for this simulation approach, the crucial sag-free initialization functionality is often ignored. Thus, when the simulation starts, gravity causes any artistic hairstyle to sag and deform into unintended and undesirable shapes. This paper proposes a novel four-stage sag-free initialization framework to solve stable quasistatic configurations for hybrid strand-based hair dynamic systems. These four stages are split into two global-local pairs. The first one ensures static equilibrium at every Eulerian grid node, with additional inequality constraints to prevent stress from exiting the yield surface. We then derive several associated closed-form solutions in the local stage to compute segment rest lengths, orientations, and particle deformation gradients in parallel. The second global-local step solves along each hair strand to ensure all the bend and twist constraints produce zero net torque on every hair segment, followed by a local step to adjust the rest Darboux vectors to unit quaternions. We also introduce an essential modification for the Darboux vector to eliminate the ambiguity of the Cosserat rod rest pose in both initialization and simulation. We evaluate our method on a wide range of hairstyles; our approach takes only a few seconds to minutes to compute the rest quasistatic configurations for hundreds of hair strands.
Our results show that our method successfully prevents sagging and has minimal impact on the hair motion during simulation. 
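One local step the abstract mentions is adjusting rest Darboux vectors back to unit quaternions. A hedged sketch of that projection, with pure Python and illustrative names (the paper's actual formulation is part of a larger solver):

```python
# Sketch of the quaternion renormalization idea: project a rest Darboux
# quaternion back to unit length. Names are illustrative assumptions.
import math

def normalize_quaternion(q):
    """q = (w, x, y, z). Return the unit quaternion pointing the same way."""
    n = math.sqrt(sum(c * c for c in q))
    if n == 0.0:
        raise ValueError("zero quaternion has no direction")
    return tuple(c / n for c in q)

print(normalize_quaternion((2.0, 0.0, 0.0, 0.0)))  # -> (1.0, 0.0, 0.0, 0.0)
```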
  5. We increasingly rely on up-to-date, data-driven graphs to understand our environments and make informed decisions. However, many of the methods blind and visually impaired (BVI) users rely on to access data-driven information do not convey important shape characteristics of graphs, are not refreshable, or are prohibitively expensive. To address these limitations, we introduce two refreshable, 1-DOF audio-haptic interfaces based on haptic cues fundamental to object shape perception. Slide-tone uses finger position with sonification, and Tilt-tone uses fingerpad contact inclination with sonification to provide shape feedback to users. Through formative design workshops (n = 3) and controlled evaluations (n = 8), we found that BVI participants appreciated the additional shape information, versatility, and reinforced understanding these interfaces provide, and that task accuracy was comparable to using interactive tactile graphics or sonification alone. Our research offers insight into the benefits, limitations, and considerations for adopting these haptic cues into a data visualization context.
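The position-with-sonification idea behind Slide-tone can be sketched as a simple mapping: the finger's horizontal position selects a data value, which is mapped to a pitch. The frequency range and the linear mapping below are assumptions for illustration, not the paper's specification.

```python
# Illustrative sketch of position-to-pitch sonification: finger x-position in
# [0, 1] picks a data value; the value is linearly mapped to a frequency in Hz.
# Range and mapping are assumptions, not the Slide-tone implementation.

def sonify(values, finger_x, f_min=220.0, f_max=880.0):
    """Pick the value under the finger and map it to a pitch in Hz."""
    idx = min(int(finger_x * len(values)), len(values) - 1)
    v = values[idx]
    lo, hi = min(values), max(values)
    t = 0.0 if hi == lo else (v - lo) / (hi - lo)   # normalize value to [0, 1]
    return f_min + t * (f_max - f_min)

print(sonify([1, 3, 2, 4], 0.4))  # value 3 at index 1 -> 660.0 Hz
```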