

Title: An Independent and Interactive Museum Experience for Blind People
Museums are gradually becoming more accessible to blind people, who have shown interest in visiting museums and in appreciating visual art. Yet their ability to visit museums still depends on assistance from family and friends or from museum personnel. Based on this observation and on prior research, we developed a solution that supports an independent, interactive museum experience, using continuous tracking of the user's location and orientation to enable seamless interaction between navigation and art appreciation. Accurate localization and context awareness allow for turn-by-turn guidance (Navigation Mode), as well as detailed audio content when the user faces an artwork in close proximity (Art Appreciation Mode). To evaluate our system, we installed it at The Andy Warhol Museum in Pittsburgh and conducted a user study in which nine blind participants followed routes of interest while learning about the artworks. We found that all participants were able to follow the intended path, immediately grasped how to switch between Navigation and Art Appreciation modes, and valued listening to the audio content in front of each artwork. They also reported high satisfaction and an increased motivation to visit museums more often.
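The seamless switch between the two modes that the abstract describes can be pictured as a simple context-awareness rule: stay in Navigation Mode unless the tracked user is both close to and facing an artwork. The sketch below is a hypothetical illustration of that rule only; the thresholds, the artwork representation, and the function names are assumptions, not the paper's actual implementation.

```python
import math

# Hypothetical sketch of context-aware mode switching: the thresholds,
# artwork list format, and function names are illustrative assumptions,
# not the system described in the paper.

PROXIMITY_M = 2.0   # assumed max distance to an artwork, in meters
FACING_DEG = 30.0   # assumed max angle between heading and artwork bearing

def bearing_deg(user_xy, art_xy):
    """Bearing from the user's position to the artwork, in degrees [0, 360)."""
    dx, dy = art_xy[0] - user_xy[0], art_xy[1] - user_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def select_mode(user_xy, heading_deg, artworks):
    """Return ('art', artwork) when the user is close to and facing a piece,
    otherwise ('nav', None) to continue turn-by-turn guidance."""
    for art in artworks:
        dist = math.dist(user_xy, art["xy"])
        # Smallest signed difference between heading and bearing, in [-180, 180]
        angle = abs((bearing_deg(user_xy, art["xy"]) - heading_deg + 180) % 360 - 180)
        if dist <= PROXIMITY_M and angle <= FACING_DEG:
            return "art", art
    return "nav", None
```

Under this sketch, turning away from the artwork or stepping outside the proximity radius would drop the user back into Navigation Mode without any explicit command, which matches the hands-free interaction the study participants grasped immediately.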
Award ID(s):
1637927
NSF-PAR ID:
10308748
Journal Name:
Proceedings of the 16th International Web for All Conference
Sponsoring Org:
National Science Foundation
More Like this
  1. Visual qualitative methodologies enhance the richness of data and make participants experts on the object of interest. Visual data brings another dimension to the evaluation process, beyond surveys and interviews, as well as depth and breadth to participants' reactions to specific program activities. Visual data consists of images such as photos, drawings, and artwork, among others. Exploring a different approach to assessing the impact of an educational activity, an exercise was designed in which participants were asked to take photos documenting a site visit to an area impacted by a swarm of earthquakes in 2019. The exercise required taking five photos of objects, persons, scenery, structures, or anything else that captured their attention during the visit, and writing a reflective essay answering three questions: 1) How do these photos represent your site visit experience? 2) Based on the content of your photos, write about what you learned, discovered, new knowledge acquired, emotions, changes in your way of thinking, etc.; and 3) What did you learn or discover from doing this exercise? Twenty-two undergraduate engineering and architecture students from the RISE-UP Program, enrolled in a curricular sequence in design and construction of resilient and sustainable structures, completed the exercise. Analyses of the obtained data include the frequency of captured images and content analysis of the reflective essays to determine instances where each of the four proposed learning objectives was present. Results show that, across essays, 32% include text demonstrating impact related to the first objective, 59% for the second, 73% for the third, and 86% for the fourth. Forty-five percent of essays included text considered relevant but not related to an objective. Personal, social, and career insights were categorized as unintended results.
Photos taken by students represent what they considered relevant during the visit and also evidence the achievement of the proposed learning objectives. In general, three major categories emerged from the photo content: 1) photos related to the design and construction of the structures and specific earthquake damage observed; 2) photos of classmates, professors, and group activities; and 3) other photos that do not share a theme. Both the photos and the essays demonstrate that the learning objectives were successfully achieved and encourage the use of visual data as an alternative for the evaluation of educational activities.
  2. Web data items such as shopping products, classifieds, and job listings are indispensable components of most e-commerce websites. The information on these data items is typically distributed over two or more webpages, e.g., a 'Query-Results' page showing summaries of the items and 'Details' pages containing full information about the items. While this organization of data mitigates information overload and visual clutter for sighted users, it increases the interaction overhead and effort for blind users, as back-and-forth navigation between webpages using screen reader assistive technology is tedious and cumbersome. Existing usability-enhancing solutions are unable to provide adequate support in this regard, as they predominantly focus on enabling efficient content access within a single webpage and are therefore not tailored for content distributed across multiple webpages. As an initial step towards addressing this issue, we developed AutoDesc, a browser extension that leverages a custom extraction model to automatically detect and pull out additional item descriptions from the 'Details' pages, and then proactively injects the extracted information into the 'Query-Results' page, thereby reducing the amount of back-and-forth screen reader navigation between the two webpages. In a study with 16 blind users, we observed that within the same time duration, participants were able to peruse significantly more data items on average with AutoDesc than with their preferred screen readers alone, as well as with a state-of-the-art solution.
  3. Navigation assistive technologies have been designed to support individuals with visual impairments during independent mobility by providing sensory augmentation and contextual awareness of their surroundings. Such information is habitually provided through predefined audio-haptic interaction paradigms. However, the individual capabilities, preferences, and behavior of people with visual impairments are heterogeneous and may change due to experience, context, and necessity. Therefore, the circumstances and modalities for providing navigation assistance need to be personalized to different users, and over time for each user. We conduct a study with 13 blind participants to explore how the desirability of messages provided during assisted navigation varies based on users' navigation preferences and expertise. The participants are guided through two different routes, one without prior knowledge and one previously studied and traversed. The guidance is provided through turn-by-turn instructions, enriched with contextual information about the environment. During navigation and follow-up interviews, we uncover that participants have diversified needs for navigation instructions based on their abilities and preferences. Our study motivates the design of future navigation systems capable of verbosity-level personalization in order to keep users engaged in the current situational context while minimizing distractions.
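The verbosity-level personalization this study motivates can be sketched as a filter over categorized guidance messages: essential turn-by-turn cues always pass through, while contextual enrichment is dropped as the user requests less verbosity. The categories, priorities, and function name below are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of verbosity-level personalization for assisted
# navigation; the message categories and priority values are assumptions
# for illustration, not taken from the study.

# Lower number = more essential. Turn cues and hazards are always spoken;
# landmarks and ambience are progressively added at higher verbosity.
PRIORITY = {"turn": 0, "hazard": 0, "landmark": 1, "ambience": 2}

def messages_for(messages, verbosity):
    """Keep only messages whose category priority fits the user's
    chosen verbosity level (0 = essential only, 2 = everything)."""
    return [m for m in messages if PRIORITY[m["kind"]] <= verbosity]
```

A system built this way could also adjust the level automatically, e.g., lowering verbosity on a route the user has already traversed, which matches the study's observation that needs vary with expertise and familiarity.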
  4.
    Informal science learning spaces such as museums have been exploring the potential of Augmented Reality (AR) as a means to connect visitors to places, times, or types of content that are otherwise inaccessible. This proposal reports on a design-based research project conducted at La Brea Tar Pits, an active paleontological dig site located within a city park in the heart of Los Angeles. The Natural History Museums of Los Angeles County and the University of Southern California engaged in a research practice partnership to enhance place-based science learning through the design and iterative testing of potential AR exhibits. Results from one implementation show that AR technology increased visitor interest in the park and positive emotions around science content. Significant learning gains and decreases in science misconceptions also occurred for participants. We also give guidance on developing scientifically accurate assets for AR experiences and leading users through a virtual narrative. This presentation offers insights into museum and university partnerships for promoting public understanding of science in informal spaces by leveraging place-based learning through technology-enhanced engagement. https://mw21.museweb.net/proposal/tar-ar-bringing-the-past-to-life-in-place-based-augmented-reality-science-learning/ 
  5. Web data records are usually accompanied by auxiliary webpage segments, such as filters, sort options, search forms, and multi-page links, to enhance interaction efficiency and convenience for end users. However, blind and visually impaired (BVI) persons are presently unable to fully exploit these auxiliary segments like their sighted peers, since the segments are scattered all across the screen, and the assistive technologies used by BVI users, i.e., screen readers and screen magnifiers, are not geared for efficient interaction with such scattered content. Specifically, for blind screen reader users, content navigation is predominantly one-dimensional despite the support for skipping content, and therefore navigating to and fro between different parts of the webpage is tedious and frustrating. Similarly, low vision screen magnifier users have to continuously pan back and forth between different portions of a webpage, given that only a portion of the screen is viewable at any instant due to content enlargement. Extant techniques to overcome inefficient web interaction for BVI users have mostly focused on general web-browsing activities, and as such provide little to no support for data record-specific interaction activities such as filtering and sorting, activities that are equally important for facilitating quick and easy access to desired data records. To fill this void, we present InSupport, a browser extension that: (i) employs custom machine learning-based algorithms to automatically extract auxiliary segments on any webpage containing data records; and (ii) provides an instantly accessible proxy one-stop interface for easily navigating the extracted auxiliary segments using either basic keyboard shortcuts or mouse actions.
Evaluation studies with 14 blind participants and 16 low vision participants showed significant improvement in web usability with InSupport, driven by reductions in interaction time and user effort compared to state-of-the-art solutions.