Title: From the lab to people's home: lessons from accessing blind participants' interactions via smart glasses in remote studies
Researchers have adopted remote methods, such as online surveys and video conferencing, to overcome challenges in conducting in-person usability testing, such as participation, user representation, and safety. However, remote user evaluation on hardware testbeds is limited, especially for blind participants, as such methods restrict access to observations of user interactions. We employ smart glasses in usability testing with blind people and share our lessons from a case study conducted in blind participants' homes (N=12), where the experimenter can access participants' activities via dual video conferencing: a third-person view via a laptop camera and a first-person view via smart glasses worn by the participant. We show that smart glasses hold potential for observing participants' interactions with smartphone testbeds remotely; on average, 58.7% of the interactions were fully captured via the first-person view compared to 3.7% via the third-person view. However, this gain is not uniform across participants, as it is susceptible to head movements that orient the ear towards a sound source, which highlights the need for a more inclusive camera form factor. We also share lessons learned about dealing with the lack of screen readers, a rapidly draining battery, and Internet connectivity issues in remote studies with blind participants.
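The capture-rate comparison above comes from coding each interaction once per camera view. The following is only a minimal sketch of how such per-participant rates might be aggregated; the coding labels ("full", "partial", "none") and the example data are assumptions for illustration, not the authors' actual analysis pipeline.

```python
# Hedged sketch: aggregating per-participant capture rates from coded
# interaction logs. Labels and data below are hypothetical.
from collections import Counter

def capture_rate(codes, label="full"):
    """Fraction of a participant's coded interactions fully captured in one view."""
    if not codes:
        return 0.0
    return Counter(codes)[label] / len(codes)

# One hypothetical participant, coded separately for each camera view.
first_person = ["full", "full", "partial", "none", "full"]   # smart glasses
third_person = ["none", "none", "partial", "none", "none"]   # laptop camera

print(f"first-person view: {capture_rate(first_person):.1%}")  # 60.0%
print(f"third-person view: {capture_rate(third_person):.1%}")  # 0.0%
```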
Award ID(s):
1816380
NSF-PAR ID:
10344788
Author(s) / Creator(s):
Date Published:
Journal Name:
19th Web for All Conference (W4A’22)
Page Range / eLocation ID:
1 to 11
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Third-person is a popular perspective for video games, but virtual reality (VR) seems to be primarily experienced from a first-person point of view (POV). While a first-person POV generally offers the highest presence, a third-person POV allows users to see their avatar, which supports a stronger bond with it, and the higher vantage point generally increases spatial awareness and navigation. Third-person locomotion is generally implemented using a controller or keyboard, with users often sitting down, an approach that is considered to offer low presence and embodiment. We present a novel third-person locomotion method that enables high avatar embodiment by integrating skeletal tracking with head-tilt-based input to enable omnidirectional navigation beyond the confines of the available tracking space. By interpreting movement relative to the avatar, the user always keeps facing the camera, which optimizes skeletal tracking and keeps the required instrumentation minimal (one depth camera). A user study compares the performance, usability, VR sickness incidence, and avatar embodiment of our method against using a controller for a navigation task that involves interacting with objects. Though the controller offered higher performance and usability, our locomotion method offered a significantly higher avatar embodiment.
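The abstract does not give implementation details, so the following is only a minimal sketch of the general idea of head-tilt-driven locomotion interpreted in the camera's frame; the head pitch/roll input, dead zone, and speed constant are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: map head tilt to omnidirectional third-person movement
# expressed relative to the camera, so the tracked user keeps facing the
# single depth camera. Constants below are invented for illustration.
import math

DEAD_ZONE_DEG = 5.0   # ignore small, unintentional head tilts
SPEED = 1.5           # assumed avatar movement speed, metres per second

def head_tilt_to_velocity(pitch_deg, roll_deg, camera_yaw_rad):
    """Convert head pitch/roll (forward-back, left-right tilt) into a
    world-space velocity for the avatar, interpreted in the camera's frame."""
    forward = pitch_deg if abs(pitch_deg) > DEAD_ZONE_DEG else 0.0
    strafe = roll_deg if abs(roll_deg) > DEAD_ZONE_DEG else 0.0
    if forward == 0.0 and strafe == 0.0:
        return 0.0, 0.0
    # Normalise so speed does not depend on how far the head is tilted.
    mag = math.hypot(forward, strafe)
    fx, sx = forward / mag, strafe / mag
    # Rotate the local (forward, strafe) input by the camera's yaw so that
    # "tilt forward" always means "move away from the camera".
    vx = SPEED * (fx * math.sin(camera_yaw_rad) + sx * math.cos(camera_yaw_rad))
    vz = SPEED * (fx * math.cos(camera_yaw_rad) - sx * math.sin(camera_yaw_rad))
    return vx, vz

# Example: a 10-degree forward tilt with the camera facing down +z.
print(head_tilt_to_velocity(10.0, 0.0, 0.0))  # roughly (0.0, 1.5)
```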
  2. Abstract Background Advances in biologging technology allow researchers access to previously unobservable behavioral states and movement patterns of marine animals. To relate behaviors with environmental variables, features must be evaluated at scales relevant to the animal or behavior. Remotely sensed environmental data, collected via satellites, often suffer from the effects of cloud cover and lack the spatial or temporal resolution to adequately link with individual animal behaviors or behavioral bouts. This study establishes a new method for remotely and continuously quantifying surface ice concentration (SIC) at a scale relevant to individual whales using on-animal tag video data. Results Motion-sensing and video-recording suction cup tags were deployed on 7 Antarctic minke whales (Balaenoptera bonaerensis) around the Antarctic Peninsula in February and March of 2018. To compare the scale of camera-tag observations with satellite imagery, the area of view was simulated using camera-tag parameters. For expected conditions, we found the visible area maximum to be ~100 m², which indicates that observations occur at an equivalent or finer scale than a single pixel of high-resolution visible spectrum satellite imagery. SIC was classified into one of six bins (0%, 1–20%, 21–40%, 41–60%, 61–80%, 81–100%) by two independent observers for the initial and final surfacing between dives. In the event of a disagreement, a third independent observer was introduced, and the median of the three observers' values was used. Initial results (n = 6) show that Antarctic minke whales in the coastal bays of the Antarctic Peninsula spend 52% of their time in open water, and only 15% of their time in water with SIC greater than 20%. Over time, we find significant variation in observed SIC, indicating that Antarctic minke whales occupy an extremely dynamic environment. Sentinel-2 satellite-based approaches to sea ice assessment were not possible because of persistent cloud cover during the study period. Conclusion Tag video offers a means to evaluate ice concentration at spatial and temporal scales relevant to the individual. Combined with information on underwater behavior, our ability to quantify SIC continuously at the scale of the animal will improve upon current remote sensing methods to understand the link between animal behavior and these dynamic environmental variables.
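As a small illustration of the observer-agreement step described above (two observers bin SIC, a third is added on disagreement, and the median bin is kept), here is a hedged sketch; the bin labels follow the abstract, while the function names and example values are assumptions.

```python
# Hedged sketch of resolving SIC bins across observers, as described above.
from statistics import median

BINS = ["0%", "1-20%", "21-40%", "41-60%", "61-80%", "81-100%"]  # bin indices 0..5

def resolve_sic(obs1, obs2, third_observer):
    """Return the agreed SIC bin index, consulting a third observer on disagreement."""
    if obs1 == obs2:
        return obs1
    obs3 = third_observer()
    return int(median([obs1, obs2, obs3]))

# Example: the two observers disagree (bins 1 and 3); a third observer codes bin 2.
agreed = resolve_sic(1, 3, third_observer=lambda: 2)
print(BINS[agreed])  # "21-40%"
```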
  3. Abstract

    As technology (particularly smartphone and computer technology) has advanced, sociolinguistic methodology has likewise adapted to include remote data collection. Remote methods range from approximating the traditional sociolinguistic interview via synchronous video conferencing to developing new methods for asynchronous self-recording (Boyd et al., 2015; Leeman et al., 2020). In this paper, we take a close look at the question prompts sent to participants in an asynchronous, remote self-recording project ("MI Diaries"). We discuss how some of the techniques initially developed for obtaining a range of styles in a traditional in-person sociolinguistic interview can be fruitfully adapted to a remote context. Of this range of styles, we give particular focus to Narratives of Personal Experience (Labov & Waletzky, 1967), and provide an analysis of how the theme, style, and development of prompts can encourage narratives from participants. We end with a short discussion of prompts that have successfully elicited other speech styles, and prompts that are especially fruitful with child participants.

     
  4. The culture within engineering colleges and departments has historically been quiet when it comes to social justice issues. Often the faculty in those departments are less concerned with social issues and are primarily focused on their disciplines and the concrete ways they can make academic and professional impacts in their respective arenas. However, with the social climate of the United States shifting toward an ever more politically charged one, and with current events, particularly the protests against police brutality in recent years, faculty and students are constantly inundated with news of injustices happening in our society. The murder of George Floyd on May 25, 2020 sent shockwaves across the United States and the world. The video captured of his death, shared across the globe, brought everyone's attention to the glaringly ugly problem of police brutality; paired with the COVID-19 pandemic and a US election year, the conditions were just right for a social activist movement to grow to a size that no one could ignore. Emmanuel Acho spoke out, motivated by the injustices seen in the George Floyd murder, initially with podcasts and then by writing his book "Uncomfortable Conversations with a Black Man" [1]. In his book he touched on various social justice issues such as racial terminology (i.e., Black or African American), implicit biases, white privilege, cultural appropriation, stereotypes (e.g., the "angry black man"), racial slurs (particularly the n-word), systemic racism, the myth of reverse racism, the criminal justice system, the struggles faced by black families, interracial families, allyship, and anti-racism.
    Students and faculty at Anonymous University felt compelled to set aside time to meet and discuss this book in depth through the video conferencing client Zoom. In these meetings, diverse facilitators were tasked with bringing the topics discussed by Acho in his book into conversation and pushing attendees to consider those topics critically and personally. To avoid assigning attendees reading homework in order to participate in these discussions, the relevant chapter of the audiobook version of Acho's book was played at the beginning of each meeting. Each audiobook chapter lasted between fifteen and twenty minutes, leaving forty to forty-five minutes of the hour-long meeting to discuss its content. Students and faculty examined how some of the teachings of the book could be implemented in their lives and at Anonymous University. For broader topics, they related the content back to their personal lives (e.g., raising their children to be anti-racist and their experiences with racism in American and international cultures). Each meeting was recorded for posterity in the event that those conversations would be used in a paper such as this. Each meeting had at least one facilitator whose main role was to provide discussion prompts based on the chapter and to ensure that the meeting environment was safe and inclusive. Naturally, some chapters address topics that are highly personal to some participants, so it was vital that all participants felt comfortable and supported in sharing their thoughts and experiences. The facilitator would intervene if the conversation veered in an aggressive direction; for example, if a participant started an argument with another participant in a non-constructive manner (e.g., arguing over the definition of ethnicity), the facilitator would interrupt, clear the air to bring the group back to common ground, and then continue the discussion. Otherwise, participants were allowed to steer the direction of the conversation as new avenues of discussion opened up.
    These meetings were recorded with the goal of returning to and analyzing the conversations between attendees. Grounded theory will be used to first assess the most prominent themes of discussion for each meeting [2]. Attendees will be contacted to expressly ask their permission to have their words and thoughts used in this work, and data processing will begin only upon agreement. Select attendees will be asked to participate in focus group discussions, which will also be recorded via Zoom. These discussions will center on the themes pulled from the general discussions and will aim to dive deeper into the impact that this experience has had on attendees as either students or faculty members. A set of questions will be developed as prompts, but conversation is expected to evolve organically as the focus groups interact. These sessions will be scheduled for an hour, with four focus groups of four participants each expected, for a total of sixteen focus group participants. We hope to uncover how this experience changed the lives of the participants and to present a model of how conversations such as this can promote diversity, equity, inclusion, and access activities among faculty and students outside of the formal programs and strategic plans implemented at the university, college, or departmental level.
    more » « less
  5. Online discussion forums have become an integral component of news, entertainment, information, and video-streaming websites, where people all over the world actively engage in discussions on a wide range of topics including politics, sports, music, business, health, and world affairs. Yet, little is known about their usability for blind users, who aurally interact with forum conversations using screen reader assistive technology. In an interview study, blind users stated that they often had an arduous and frustrating interaction experience while consuming conversation threads, mainly due to highly redundant content and the absence of customization options for selectively viewing portions of the conversations. As an initial step towards addressing these usability concerns, we designed PView, a browser extension that enables blind users to customize the content of forum threads in real time as they interact with them. Specifically, PView allows blind users to explicitly hide any post that is irrelevant to them, and PView then automatically detects and filters out all subsequent posts that are substantially similar to the hidden post in real time, before the users navigate to those portions of the thread. In a user study with blind participants, we observed that, compared to the status quo, PView significantly improved usability and satisfaction and reduced workload for participants interacting with the forums.
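The abstract does not state how PView measures similarity, so the sketch below only illustrates the general idea of filtering posts that are near-duplicates of one the user has hidden, using a simple token-overlap (Jaccard) score; the threshold and example thread are invented for illustration and are not PView's actual method.

```python
# Illustrative sketch: hide forum posts that largely repeat a post the user
# has already hidden. Similarity measure and threshold are assumptions.
import re

SIMILARITY_THRESHOLD = 0.6  # assumed cutoff for "substantially similar"

def tokens(text):
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def jaccard(a, b):
    a, b = tokens(a), tokens(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def filter_thread(posts, hidden_post):
    """Keep only posts that are not substantially similar to the hidden post."""
    return [p for p in posts if jaccard(p, hidden_post) < SIMILARITY_THRESHOLD]

thread = [
    "You can fix this by rebooting the router.",
    "Rebooting the router fixed this for me too.",
    "Try updating the firmware instead.",
]
# Keeps posts 1 and 3; drops the near-duplicate reply (post 2).
print(filter_thread(thread, hidden_post="Rebooting the router fixed this issue for me."))
```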

     