%A Hong, Jonggi
%A Gandhi, Jaina
%A Essuah Mensah, Ernest
%A Zeraati, Farnaz
%A Jarjue, Ebrima
%A Lee, K.
%A Kacorri, Hernisa
%D 2022
%M OSTI ID: 10344780
%R https://doi.org/10.1145/3517428.3544824
%T Blind Users Accessing Their Training Images in Teachable Object Recognizers
%X Teachable object recognizers address a practical need for blind people: instance-level object recognition. However, they assume that users can visually inspect the photos they provide for training, a critical step that is inaccessible to those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in the photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from each other. Our descriptors are built into an open-source testbed iOS app called MYCam. In a remote user study in blind participants' homes (N = 12), we show how descriptors, even when error-prone, support experimentation and have a positive impact on the quality of the training set, which can translate to better model performance, though this gain is not uniform. Participants found the app simple to use, indicating that they could effectively train it and that the descriptors were useful. However, many found the training process tedious, opening discussions about the need to balance information, time, and cognitive load.