

Title: Weaving by Touch: A Case Analysis of Accessible Making
The rise of maker communities and fabrication tools creates new opportunities for participation in design work. With this has come an interest in increasing the accessibility of making for people with disabilities, which has mainly emphasized independence and empowerment through the creation of more accessible fabrication tools. To understand and rethink the notion of accessible making, we analyze the context and practices of a particular site of making: the communal weaving studio within an assisted living facility for people with vision impairments. Our analysis helps reconsider the material and social processes that constitute accessible making, including the ways makers attend to interactive material properties, negotiate co-creative embodied work, and value the labor of making. We discuss future directions for design and research on accessible making while highlighting tensions around assistance, collaboration, and how disabled labor is valued.
Award ID(s): 1901456
NSF-PAR ID: 10180389
Date Published: 2020
Journal Name: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID: 1 to 15
Sponsoring Org: National Science Foundation
More Like this
  1. Background: Personal health technologies, including wearable tracking devices and mobile apps, have great potential to equip the general population with the ability to monitor and manage their health. However, because they are designed for sighted people, much of their functionality is largely inaccessible to the blind and low-vision (BLV) population, threatening equitable access to personal health data (PHD) and health care services.
     Objective: This study aims to understand why and how BLV people collect and use their PHD and the obstacles they face in doing so. Such knowledge can inform accessibility researchers and technology companies of the unique self-tracking needs and accessibility challenges that BLV people experience.
     Methods: We conducted a web-based and phone survey with 156 BLV people. We report quantitative and qualitative findings on their PHD tracking practices, needs, accessibility barriers, and workarounds.
     Results: BLV respondents had strong desires and needs to track PHD, and many were already tracking their data despite numerous hurdles. Popular tracking items (i.e., exercise, weight, sleep, and food) and the reasons for tracking were similar to those of sighted people. BLV people, however, face accessibility challenges throughout all phases of self-tracking, from identifying tracking tools to reviewing data. The main barriers our respondents experienced were suboptimal tracking experiences and benefits too small to justify the added burden on BLV people.
     Conclusions: These findings contribute an in-depth understanding of BLV people's motivations for PHD tracking, their tracking practices, challenges, and workarounds. They suggest that a range of accessibility challenges hinders BLV individuals from effectively gaining the benefits of self-tracking technologies. On the basis of the findings, we discuss design opportunities and research directions for making PHD tracking technologies accessible to all, including BLV people.
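     One concrete design direction the conclusions point to is making data review accessible. As a minimal, hypothetical sketch (not from the study), the Python function below summarizes a week of step counts as screen-reader-friendly text instead of a visual chart; the function name, goal value, and sample data are invented for illustration.

     # Hypothetical sketch: a text-first alternative to visual charts for
     # reviewing tracked data, addressing the "reviewing data" barrier above.
     from statistics import mean

     def summarize_steps(daily_steps, goal=8000):
         """Return a short, spoken-friendly summary of a week of step counts."""
         avg = mean(daily_steps)
         days_met = sum(1 for s in daily_steps if s >= goal)
         trend = "up" if daily_steps[-1] > daily_steps[0] else "down or flat"
         return (
             f"This week you averaged {avg:,.0f} steps per day, "
             f"met your {goal:,}-step goal on {days_met} of {len(daily_steps)} days, "
             f"and your trend is {trend}."
         )

     # Example: a screen reader can speak this summary directly.
     print(summarize_steps([6200, 9100, 7800, 10250, 5400, 8800, 9600]))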
  2. Despite the promise of the maker movement as empowering individuals and democratizing design, people with disabilities still face many barriers to participation. Recent work has highlighted the inaccessible nature of making and introduced more accessible maker technologies, practices, and workspaces. One less explored area of accessible making involves supporting more traditional forms of craftwork, such as weaving and fiber arts. The present study reports an analysis of existing practices at a weaving studio within a residential community for people with vision impairments and explores the creation of an audio-enhanced loom to support this practice. Our iterative design process began with 60 hours of field observations at the weaving studio, complemented by 15 interviews with residents and instructors at the community. These insights informed the design of Melodie, an interactive floor loom that senses weaving activity and provides audio feedback during weaving. Our design exploration of Melodie revealed four scenarios of use in this community: promoting learning among novice weavers, raising awareness of system state, enhancing the aesthetics of weaving, and supporting artistic performance. We identify recommendations for designing audio-enhanced technologies that promote accessible crafting and reflect on the role of technology in predominantly manual craftwork.
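     The abstract does not describe how Melodie is implemented, but the audio-feedback idea can be sketched: sensed loom events trigger tones whose pitch encodes loom state, in the spirit of the "raising awareness of system state" scenario. In the hypothetical Python sketch below, the event format, shaft-to-pitch mapping, and cue choices are all assumptions, not the paper's design.

     # Hypothetical sketch: mapping sensed weaving events to audio cues.
     import math
     import struct
     import wave

     SAMPLE_RATE = 22050

     def tone(freq_hz, dur_s=0.15, volume=0.4):
         """Synthesize a short sine tone as 16-bit mono PCM bytes."""
         n = int(SAMPLE_RATE * dur_s)
         return b"".join(
             struct.pack("<h", int(volume * 32767 *
                                   math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)))
             for t in range(n)
         )

     # Assumed mapping: each treadle raises a shaft; higher shaft, higher pitch.
     SHAFT_PITCH = {1: 262, 2: 330, 3: 392, 4: 523}  # C4, E4, G4, C5

     def cue_for_event(event):
         """Return audio bytes for a sensed loom event (format is assumed)."""
         if event["type"] == "treadle_press":
             return tone(SHAFT_PITCH[event["shaft"]])
         if event["type"] == "beater_return":
             return tone(196, dur_s=0.08)  # short low G marks a completed pick
         return b""

     # Demo: write a short cue sequence to a WAV file for playback.
     with wave.open("cues.wav", "wb") as w:
         w.setnchannels(1)
         w.setsampwidth(2)
         w.setframerate(SAMPLE_RATE)
         for ev in [{"type": "treadle_press", "shaft": 1},
                    {"type": "treadle_press", "shaft": 3},
                    {"type": "beater_return"}]:
             w.writeframes(cue_for_event(ev))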
  3. Identifying people in photographs is a critical task in a wide variety of domains, from national security [7] to journalism [14] to human rights investigations [1]. The task is also fundamentally complex and challenging. With the world population at 7.6 billion and growing, the candidate pool is large. Studies of human face recognition ability show that the average person incorrectly identifies two people as similar 20–30% of the time, and trained police detectives do not perform significantly better [11]. Computer vision-based face recognition tools have gained considerable ground and are now widely available commercially, but comparisons to human performance show mixed results at best [2,10,16].
     Automated face recognition techniques, while powerful, also have constraints that may be impractical for many real-world contexts. For example, face recognition systems tend to suffer when the target image or reference images have poor quality or resolution, as blemishes or discolorations may be incorrectly recognized as false positives for facial landmarks. Additionally, most face recognition systems ignore some salient facial features, like scars or other skin characteristics, as well as distinctive non-facial features, like ear shape or hair or facial hair styles.
     This project investigates how we can overcome these limitations to support person identification tasks. By adjusting confidence thresholds, users of face recognition can generally expect high recall (few false negatives) at the cost of low precision (many false positives). Therefore, we focus our work on the "last mile" of person identification, i.e., helping a user find the correct match among a large set of similar-looking candidates suggested by face recognition. Our approach leverages the powerful capabilities of the human vision system and collaborative sensemaking via crowdsourcing to augment the complementary strengths of automatic face recognition. The result is a novel technology pipeline combining collective intelligence and computer vision.
     We scope this project to identifying soldiers in photos from the American Civil War era (1861–1865). An estimated 4,000,000 soldiers fought in the war, and most were photographed at least once, due to decreasing costs, the increasing robustness of the format, and the critical events separating friends and family [17]. Over 150 years later, the identities of most of these portraits have been lost, but as museums and archives increasingly digitize and publish their collections online, the pool of reference photos and information has never been more accessible. Historians, genealogists, and collectors work tirelessly to connect names with faces, using largely manual identification methods [3,9]. Identifying people in historical photos is important for preserving material culture [9], correcting the historical record [13], and recognizing contributions of marginalized groups [4], among other reasons.
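     The threshold-versus-precision trade-off above can be made concrete with a small illustrative sketch (ours, not the project's code): scoring face embeddings against a target and applying a deliberately permissive threshold yields the high-recall, low-precision candidate set that the "last mile" of human review must then resolve. The cosine-similarity scoring, threshold value, and random stand-in embeddings below are all assumptions.

     # Illustrative sketch: a permissive threshold trades precision for recall,
     # producing a large shortlist for human (e.g., crowdsourced) review.
     import numpy as np

     def cosine_sim(a, b):
         return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

     def candidate_set(target_emb, reference_embs, threshold=0.1):
         """Return (index, score) pairs above a deliberately low threshold,
         ranked by score: few false negatives, many false positives."""
         scores = [cosine_sim(target_emb, ref) for ref in reference_embs]
         ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
         return [(i, scores[i]) for i in ranked if scores[i] >= threshold]

     # Toy demo with random vectors standing in for real face embeddings.
     rng = np.random.default_rng(0)
     target = rng.normal(size=128)
     refs = rng.normal(size=(500, 128))
     shortlist = candidate_set(target, refs)
     print(f"{len(shortlist)} candidates pass the high-recall threshold")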
  4. Despite the phenomenal advances in the computational power and functionality of electronic systems, human-machine interaction has largely been limited to simple control panels, keyboards, mice, and displays. Consequently, these systems either rely critically on close human guidance or operate almost independently of the user. An exemplar technology integrated tightly into our lives is the smartphone. However, the term "smart" is a misnomer, since the phone fundamentally has no intelligence with which to understand its user. Users still have to type, touch, or (to some extent) speak to express their intentions in a form accessible to the phone. Hence, intelligent decision making is still almost entirely a human task. A life-changing experience can be achieved by transforming machines from passive tools into agents capable of understanding human physiology and what their user wants [1]. This can advance human capabilities in unimagined ways by building a symbiotic relationship to solve real-world problems cooperatively.
     One high-impact application area of this approach is assistive Internet of Things (IoT) technologies for physically challenged individuals. The World Report on Disability reveals that 15% of the world population lives with disability, and that 110 to 190 million of these people have significant difficulties in functioning [1]. Quality of life for this population can improve significantly if we can provide accessibility to smart devices that supply sensory inputs and assist with everyday tasks.
     This work demonstrates that smart IoT devices open up the possibility of alleviating the burden on the user by equipping everyday objects, such as a wheelchair, with decision-making capabilities. Moving part of the intelligent decision making to smart IoT objects requires a robust mechanism for human-machine communication (HMC). To address this challenge, we present examples of multimodal HMC mechanisms, where the modalities are electroencephalography (EEG), speech commands, and motion sensing. We also introduce an IoT co-simulation framework developed using a network simulator (OMNeT++) and the Virtual Robot Experimentation Platform (V-REP). We show how this framework is used to evaluate the effectiveness of different HMC strategies, using automated indoor navigation as a driver application.
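     The abstract names the modalities but not a fusion rule. One plausible, hypothetical sketch is confidence-weighted voting over per-modality classifier outputs, shown below in Python; the command set, tuple format, and fail-safe behavior are assumptions, not the paper's method.

     # Hypothetical sketch: fusing EEG, speech, and motion classifier outputs
     # into a single navigation command via confidence-weighted voting.
     from collections import defaultdict

     COMMANDS = {"forward", "back", "left", "right", "stop"}

     def fuse(modality_outputs):
         """modality_outputs: (modality, command, confidence) tuples.

         Each modality votes for a command weighted by its classifier
         confidence; the command with the largest summed weight wins.
         """
         votes = defaultdict(float)
         for _modality, command, confidence in modality_outputs:
             if command in COMMANDS:
                 votes[command] += confidence
         if not votes:
             return "stop"  # fail safe: no valid command from any modality
         return max(votes, key=votes.get)

     # Example: EEG and motion agree on "forward"; speech mishears "left".
     print(fuse([("eeg", "forward", 0.62),
                 ("speech", "left", 0.55),
                 ("motion", "forward", 0.48)]))  # -> forward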