
Title: RoboGraphics: Dynamic Tactile Graphics Powered by Mobile Robots
Tactile graphics are a common way to present information to people with vision impairments. They can be used to explore a broad range of static visual content but are not well suited to representing animation or interactivity. We introduce a new approach to creating dynamic tactile graphics that combines a touch screen tablet, static tactile overlays, and small mobile robots. We present a prototype system called RoboGraphics and several proof-of-concept applications. We evaluated our prototype with seven participants with varying levels of vision, comparing the RoboGraphics approach to a flat-screen, audio-tactile interface. Our results show that dynamic tactile graphics can help visually impaired participants explore data quickly and accurately.
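As a rough illustration of the approach above, the sketch below maps the values of a simple bar chart to positions on the tablet surface and sends one robot to each bar. This is a minimal sketch under assumed names: the Robot class, its move_to() command, and the surface dimensions are hypothetical, not the RoboGraphics implementation.

# Hypothetical sketch: place one small robot at the top of each bar of a
# bar chart so each data point becomes a touchable, movable marker.
# Robot, move_to(), and the surface dimensions are assumptions.
from dataclasses import dataclass

@dataclass
class Robot:
    robot_id: int

    def move_to(self, x_mm: float, y_mm: float) -> None:
        # A real system would send a motion command to the robot here
        # (e.g. over a wireless link); this sketch just prints the target.
        print(f"robot {self.robot_id} -> ({x_mm:.1f}, {y_mm:.1f}) mm")

def layout_bar_chart(values, robots, surface_w_mm=240.0, surface_h_mm=160.0):
    """Compute a target position per data value and drive a robot to it."""
    v_max = max(values)
    col_w = surface_w_mm / len(values)
    for i, (value, robot) in enumerate(zip(values, robots)):
        x = (i + 0.5) * col_w                       # centre of the bar's column
        y = surface_h_mm * (1.0 - value / v_max)    # larger value = nearer the top edge
        robot.move_to(x, y)

layout_bar_chart([3, 7, 5, 9], [Robot(i) for i in range(4)])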
Award ID(s):
1652907
NSF-PAR ID:
10165067
Journal Name:
ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility
Page Range or eLocation-ID:
318 to 328
Sponsoring Org:
National Science Foundation
More Like this
  1. Teachers of the visually impaired (TVIs) regularly present tactile materials (tactile graphics, 3D models, and real objects) to students with vision impairments. Researchers have been increasingly interested in designing tools to support the use of tactile materials, but we still lack an in-depth understanding of how tactile materials are created and used in practice today. To address this gap, we conducted interviews with 21 TVIs and a 3-week diary study with eight of them. We found that tactile materials were regularly used for academic as well as non-academic concepts like tactile literacy, motor ability, and spatial awareness. Real objects and 3D models served as “stepping stones” to tactile graphics, and our participants preferred to teach with 3D models, despite finding them difficult to create, obtain, and modify. Use of certain materials also carried social implications; participants selected materials that fostered student independence and allowed classroom inclusion. We contribute design considerations, encouraging future work on tactile materials to enable student and TVI co-creation, facilitate rapid prototyping, and promote movement and spatial awareness. To support future research in this area, our paper provides a fundamental understanding of current practices. We bridge these practices to established pedagogical approaches and highlight opportunities for growth regarding this important genre of educational materials.
  2. Despite having widespread application in the biomedical sciences, flow cytometers have several limitations that prevent their application to point-of-care (POC) diagnostics in resource-limited environments. 3D printing provides a cost-effective approach to improve the accessibility of POC devices in resource-limited environments. Towards this goal, we introduce a 3D-printed imaging platform (3DPIP) capable of accurately counting particles and performing fluorescence microscopy. In our 3DPIP, captured microscopic images of particle flow are processed by a custom-developed particle counter code to provide a particle count. This prototype uses a machine vision-based algorithm to identify particles from captured flow images and is flexible enough to allow for labeled and label-free particle counting. Additionally, the particle counter code returns particle coordinates with respect to time, which can further be used to perform particle image velocimetry. These results can help estimate forces acting on particles, and identify and sort different types of cells/particles. We evaluated the performance of this prototype by counting 10 μm polystyrene particles diluted in deionized water at different concentrations and comparing the results with a commercial Beckman-Coulter Z2 particle counter. The 3DPIP can count particle concentrations down to ∼100 particles per mL with a standard deviation of ±20 particles, which is comparable to the results obtained on a commercial particle counter. Our platform produces accurate results at flow rates up to 9 mL h⁻¹ for concentrations below 1000 particles per mL, while 5 mL h⁻¹ produces accurate results above this concentration limit. Aside from performing flow-through experiments, our instrument is capable of performing static experiments that are comparable to a plate reader. In this configuration, our instrument is able to count between 10 and 250 cells per image, depending on the prepared concentration of bacteria samples (Citrobacter freundii; ATCC 8090). Overall, this platform represents a first step towards the development of an affordable, fully 3D-printable imaging flow cytometry instrument for use in resource-limited clinical environments. (An illustrative particle-counting sketch appears after this list.)
  3. We increasingly rely on up-to-date, data-driven graphs to understand our environments and make informed decisions. However, many of the methods blind and visually impaired (BVI) users rely on to access data-driven information do not convey important shape characteristics of graphs, are not refreshable, or are prohibitively expensive. To address these limitations, we introduce two refreshable, 1-DOF audio-haptic interfaces based on haptic cues fundamental to object shape perception. Slide-tone uses finger position with sonification, and Tilt-tone uses fingerpad contact inclination with sonification to provide shape feedback to users. Through formative design workshops (n = 3) and controlled evaluations (n = 8), we found that BVI participants appreciated the additional shape information, versatility, and reinforced understanding these interfaces provide, and that task accuracy was comparable to using interactive tactile graphics or sonification alone. Our research offers insight into the benefits, limitations, and considerations for adopting these haptic cues in a data visualization context. (An illustrative position-to-pitch sonification sketch appears after this list.)
  4. The most common sensing modalities found in a robot perception system are vision and touch, which together can provide global and highly localized data for manipulation. However, these sensing modalities often fail to adequately capture the behavior of target objects during the critical moments when they transition out of static, controlled contact with an end-effector into dynamic, uncontrolled motion. In this work, we present a novel multimodal visuotactile sensor that provides simultaneous visuotactile and proximity depth data. The sensor integrates an RGB camera and an air pressure sensor to sense touch with an infrared time-of-flight (ToF) camera to sense proximity, leveraging a selectively transmissive soft membrane to enable the dual sensing modalities. We present the mechanical design, fabrication techniques, algorithm implementations, and evaluation of the sensor's tactile and proximity modalities. The sensor is demonstrated in three open-loop robotic tasks: approaching and contacting an object, catching, and throwing. The fusion of tactile and proximity data could be used to capture key information about a target object's transition behavior for sensor-based control in dynamic manipulation. (An illustrative tactile-proximity fusion sketch appears after this list.)
  5. We introduce a system that exploits the screen and front-facing camera of a mobile device to perform three-dimensional deflectometry-based surface measurements. In contrast to current mobile deflectometry systems, our method can capture surfaces with large normal variation and a wide field of view (FoV). We achieve this by applying automated multi-view panoramic stitching algorithms to produce a large-FoV normal map from a hand-guided capture process, without the need for external tracking systems such as robot arms or fiducials. The presented work enables 3D surface measurements of specular objects 'in the wild' with a system accessible to users with little to no technical imaging experience. We demonstrate high-quality 3D surface measurements without the need for a calibration procedure. We provide experimental results with our prototype deflectometry system and discuss applications for computer vision tasks such as object detection and recognition. (An illustrative panoramic-stitching sketch appears after this list.)
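For the imaging platform in item 2, the abstract describes a machine-vision step that identifies particles in captured flow images and records their coordinates over time. The sketch below shows one conventional way such a step can be written with OpenCV (thresholding followed by contour detection); it illustrates the general technique only, not the 3DPIP particle-counter code, and the size limits are placeholder values.

# Illustrative particle counting on a single flow image using OpenCV.
# Thresholding and size limits are placeholders, not the 3DPIP settings.
import cv2

def count_particles(image_path, min_area_px=5, max_area_px=500):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Otsu thresholding separates bright particles from the darker background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if min_area_px <= area <= max_area_px:          # reject noise and clumps
            m = cv2.moments(contour)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return len(centroids), centroids

Running such a step per frame yields a count plus per-frame coordinates; tracking those coordinates across frames is what makes the particle image velocimetry mentioned in the abstract possible.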
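For the Slide-tone interface in item 3, one way to picture "finger position with sonification" is a mapping from the finger's horizontal position on a graph to the data value at that position, and from that value to an audible pitch. The mapping below is a hypothetical illustration of that idea, not the published Slide-tone design; the function names and frequency range are arbitrary.

# Hypothetical position-to-pitch mapping: finger x-position selects a data
# point, whose value is rendered as a tone frequency. Bounds are arbitrary.
def value_at_finger(x_norm, series):
    """x_norm in [0, 1] along the graph's x-axis; returns the nearest sample."""
    idx = min(int(x_norm * len(series)), len(series) - 1)
    return series[idx]

def value_to_frequency(value, v_min, v_max, f_low=220.0, f_high=880.0):
    """Linearly map a data value onto a pitch between f_low and f_high (Hz)."""
    t = (value - v_min) / (v_max - v_min) if v_max > v_min else 0.0
    return f_low + t * (f_high - f_low)

series = [2.0, 4.5, 3.1, 6.8, 5.0]
freq = value_to_frequency(value_at_finger(0.62, series), min(series), max(series))
print(f"play tone at {freq:.0f} Hz")   # an audio library would synthesize the tone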
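For the visuotactile sensor in item 4, the interesting moment is when an object leaves controlled contact and begins to move freely. The sketch below shows a simple, hypothetical way to label that transition by combining an internal pressure reading (touch) with a ToF depth reading (proximity); the field names and thresholds are assumptions, not the sensor's published processing pipeline.

# Hypothetical contact/near/free labelling from fused tactile and proximity
# readings. Thresholds are uncalibrated placeholders.
from dataclasses import dataclass

@dataclass
class Reading:
    pressure_kpa: float   # internal air pressure (tactile channel)
    tof_depth_mm: float   # time-of-flight distance to the object (proximity channel)

def classify(reading, contact_kpa=2.0, near_mm=30.0):
    if reading.pressure_kpa > contact_kpa:
        return "in_contact"                # object is pressing on the membrane
    if reading.tof_depth_mm < near_mm:
        return "near"                      # object has left the membrane but is close
    return "free"                          # object is in free, uncontrolled motion

stream = [Reading(3.2, 1.0), Reading(0.4, 12.0), Reading(0.1, 85.0)]
print([classify(r) for r in stream])       # ['in_contact', 'near', 'free']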
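For the deflectometry system in item 5, the multi-view stitching step can be illustrated with OpenCV's general-purpose stitcher, which registers and blends overlapping captures. This sketch only shows stitching of ordinary camera frames with hypothetical file names; recovering and merging the surface normal maps themselves requires the deflectometry processing described in the paper and is not reproduced here.

# Illustration of panoramic stitching on overlapping captures using
# OpenCV's built-in stitcher (ordinary images, not normal maps).
import cv2

def stitch_captures(paths):
    images = [cv2.imread(p) for p in paths]
    if any(img is None for img in images):
        raise FileNotFoundError("one or more captures could not be read")
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

# Example usage (hypothetical file names):
# pano = stitch_captures(["capture_01.png", "capture_02.png", "capture_03.png"])
# cv2.imwrite("panorama.png", pano)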