Smart home cameras present new challenges for understanding behaviors and relationships surrounding always-on, domestic recording systems. We designed a series of discursive activities involving 16 individuals from ten households for six weeks in their everyday settings. These activities functioned as speculative probes, prompting participants to reflect on themes of privacy and power through filming with cameras in their households. Our research design foregrounded critical-playful enactments that allowed participants to speculate on potential relationships with cameras in the home beyond everyday use. We present four key dynamics between participants and home cameras by examining participants' relationships to the camera's eye, filming, their own data, and the camera's societal contexts. We contribute discussions about the mundane, information privacy, and post-hoc reflection with one's camera footage. Overall, our findings reveal the camera as a strange, yet banal entity in the home, interrogating how participants compose and handle their own and others' video data.
Whose Video?: Surveying Implications for Participants' Engagement in Video Recording Practices in Ethnographic Research
This symposium aims to build on the argument for viewing video recording as theory (Hall, 2000) by focusing on instances when participants intentionally engage with ongoing recording, move or interact with recording equipment, and (re)purpose video records. All four papers use example interactions to highlight how participants reorient data collection and use, reorganizing control over how their stories are recorded, shared, and analyzed in the future; we argue that these moves are attempts to further relationship building, countering the surveillance technologies cameras have become (Vossoughi & Escude, 2016). We further discuss the methodological implications for future research, asking: video recording as whose theory?
- Award ID(s):
- 1742257
- PAR ID:
- 10202103
- Editor(s):
- Gresalfi, M.S.
- Date Published:
- Journal Name:
- The Interdisciplinarity of the Learning Sciences, 14th International Conference of the Learning Sciences (ICLS) 2020
- Volume:
- 1
- Page Range / eLocation ID:
- 414-421
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Paper presented at the 2021 Annual Conference of the American Educational Research Association, as part of the symposium Opportunities and Challenges Associated with Approximations of Practice in Teacher Education Programs. In 1964, Canadian philosopher Marshall McLuhan proposed a then-radical idea in media theory: we should study not only the content of messages but also the impact of the medium itself on society. He captured this idea in the phrase "the medium is the message." We take this idea into the study of teachers' approximations of practice. We ask: Can the creation medium impact teachers' approximation of practice, especially their recomposition of practice? The purpose of the study reported in this paper is to investigate differences in how prospective teachers may decompose and recompose practice during video and written approximations of practice. By written approximation of practice, we refer to a response to an assignment such as producing (parts of) a lesson plan; by video approximation of practice, we refer to a response to an assignment such as video recording oneself responding to a (hypothetical) student.
-
Analyzing dance moves and routines is a foundational step in learning dance. Videos are often utilized at this step, and advancements in machine learning, particularly in human-movement recognition, could further assist dance learners. We developed and evaluated a Wizard-of-Oz prototype of a video comprehension tool that offers automatic in-situ dance move identification functionality. Our system design was informed by an interview study involving 12 dancers, conducted to understand the challenges they face when comprehending complex dance videos and taking notes. Subsequently, we conducted a within-subject study with 8 Cuban salsa dancers to identify the benefits of our system compared to an existing traditional feature-based search system. We found that the quality of notes taken by participants improved when using our tool, and they reported a lower workload. Based on participants' interactions with our system, we offer recommendations on how an AI-powered span-search feature can enhance dance video comprehension tools.
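To make the span-search idea concrete (retrieving the time spans of a video where a named move occurs), a minimal Python sketch follows; the annotation format, the move names, and the `span_search` helper are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class MoveSpan:
    move: str     # name of the recognized dance move
    start: float  # start time in seconds
    end: float    # end time in seconds

# Hypothetical output of a human-movement recognizer run over a salsa video.
annotations = [
    MoveSpan("basic step", 0.0, 8.5),
    MoveSpan("cross body lead", 8.5, 12.0),
    MoveSpan("basic step", 12.0, 19.0),
    MoveSpan("hammerlock", 19.0, 24.5),
]

def span_search(annotations, query):
    """Return the (start, end) spans in which the queried move appears."""
    return [(a.start, a.end) for a in annotations if a.move == query]

print(span_search(annotations, "basic step"))  # [(0.0, 8.5), (12.0, 19.0)]
```

A learner could feed these spans to a video player to jump directly to every occurrence of a move, rather than scrubbing the timeline manually.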
-
Introduction
This dataset was gathered during the Vid2Real online video-based study, which investigates humans' perception of robots' intelligence in the context of an incidental human-robot encounter. The dataset contains participants' questionnaire responses to four video study conditions: Baseline, Verbal, Body Language, and Body Language + Verbal. The videos depict a scenario in which a pedestrian incidentally encounters a quadruped robot trying to enter a building. Depending on the condition, the robot uses verbal commands or body language to ask the pedestrian for help; the differences between conditions were manipulated using the robot's verbal and expressive movement functionalities.

Dataset Purpose
The dataset includes human subjects' responses about the robot's social intelligence, used to validate the hypothesis that robot social intelligence is positively correlated with human compliance in an incidental human-robot encounter context. The video-based dataset was also developed to obtain empirical evidence that can inform the design of future real-world HRI studies.

Dataset Contents
- Four videos, each corresponding to a study condition.
- Four sets of Perceived Social Intelligence Scale data, one per study condition.
- Four sets of compliance-likelihood questions; each set includes one Likert question and one free-form question.
- One set of Godspeed questionnaire data.
- One set of Anthropomorphism questionnaire data.
- A csv file containing the participants' demographic data, Likert scale data, and text responses.
- A data dictionary explaining the meaning of each field in the csv file.

Study Conditions
There are four videos, one per study condition:
- Baseline: The robot walks up to the entrance and waits for the pedestrian to open the door, without any additional behaviors. This is the control condition.
- Verbal: The robot walks up to the entrance, says "Can you please open the door for me" to the pedestrian while facing the same direction, then waits for the pedestrian to open the door.
- Body Language: The robot walks up to the entrance, turns its head to look at the pedestrian, then turns its head to face the door, and waits for the pedestrian to open the door.
- Body Language + Verbal: The robot walks up to the entrance, turns its head to look at the pedestrian, says "Can you open the door for me" to the pedestrian, then waits for the pedestrian to open the door.
[Images showing the Verbal and Body Language conditions.]
A within-subject design was adopted, and all participants experienced all conditions. The order of the videos, as well as of the PSI scales, was randomized. After giving consent, participants were presented with one video, followed by the PSI questions and the two exploratory (compliance-likelihood) questions described above. This set was repeated four times, after which participants reported their general perceptions of the robot via the Godspeed and AMPH questionnaires. Each video was around 20 seconds long, and the total study time was around 10 minutes.

Video as a Study Method
Video-based studies are a common data-collection method in human-robot interaction research. Videos can easily be distributed via online participant-recruiting platforms and can reach a larger sample than in-person or lab-based studies, making them a fast and easy data-collection method for research aiming to obtain empirical evidence.

Video Filming
The videos were filmed from a first-person point of view to maximize the alignment between the video and real-world settings. The recording device was an iPhone 12 Pro, and the videos were shot in 4K at 60 fps. For better accessibility, the videos have been converted to lower resolutions.

Instruments
The questionnaires used in the study include the Perceived Social Intelligence Scale (PSI), the Godspeed Questionnaire, and the Anthropomorphism Questionnaire (AMPH). In addition, a 5-point Likert question and a free-text response measuring human compliance were added for the purposes of the video-based study. Participant demographic data was also collected. Questionnaire items are attached as part of this dataset.

Human Subjects
Participants were recruited through Prolific and were therefore Prolific users. They were restricted to people currently living in the United States who are fluent in English and have no hearing or visual impairments; no other restrictions were imposed. Among the 385 participants, 194 identified as female and 191 as male, and ages ranged from 19 to 75 (M = 38.53, SD = 12.86). Human subjects remained anonymous. Participants were compensated with $4 upon submission approval. This study was reviewed and approved by the UT Austin Institutional Review Board.

Robot
The dataset contains data about humans' perceived social intelligence of Spot (Explorer model), a quadruped robot by Boston Dynamics. The robot was selected because quadruped robots are gradually being adopted to provide services such as delivery, surveillance, and rescue. However, there are still obstacles that robots cannot easily overcome by themselves, for which they must ask for help from nearby humans, so it is important to understand how humans react to a quadruped robot they incidentally encounter. For the purposes of this video study, the robot operation was semi-autonomous: navigation was manually teleoperated by an operator, supplemented by a few standalone autonomous modules.

Data Collection
The data was collected through Qualtrics, a survey-development platform, and was downloaded as a csv file once data collection was complete.

Data Quality Control
Qualtrics automatically detects bots, so any response flagged as a bot was discarded, as were all incomplete and duplicate responses.

Data Usage
This dataset can be used to conduct a meta-analysis of robots' perceived intelligence. Note that the data is coupled with this study design; users interested in data reuse will have to assess whether this dataset is in line with their own study design.

Acknowledgement
This study was funded through NSF Award #2219236, GCR: Community-Embedded Robotics: Understanding Sociotechnical Interactions with Long-term Autonomous Deployments.
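For researchers assessing reuse, a minimal sketch of loading the response file and summarizing the 5-point compliance ratings per condition might look like the following; the file name and column names here are assumptions for illustration, since the real schema is defined in the dataset's data dictionary.

```python
import pandas as pd

# Load the questionnaire responses. The filename and column names are
# assumed for this sketch; consult the dataset's data dictionary for
# the actual schema.
df = pd.read_csv("vid2real_responses.csv")

# Suppose each row is one participant-condition pair, with a 'condition'
# column (Baseline, Verbal, Body Language, Body Language + Verbal) and a
# 'compliance_likert' column holding the 5-point compliance rating.
summary = (
    df.groupby("condition")["compliance_likert"]
      .agg(["mean", "std", "count"])
      .sort_values("mean", ascending=False)
)
print(summary)
```

A per-condition summary like this is the natural starting point for testing the stated hypothesis that perceived social intelligence correlates with compliance likelihood.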
-
Learners' awareness of their own affective states (emotions) can improve their meta-cognition, a critical skill of being aware of and controlling one's cognition, motivation, and affect, and adjusting one's learning strategies and behaviors accordingly. To investigate the effect of peers' affects on learners' meta-cognition, we proposed two types of cues that aggregated peers' affects recognized via facial expression recognition: Locative cues (displaying the spikes of peers' emotions along a video timeline) and Temporal cues (showing the positivity of peers' emotions at different segments of a video). We conducted a between-subject experiment with 42 college students using think-aloud protocols, interviews, and surveys. Our results showed that the two types of cues improved participants' meta-cognition differently. For example, interacting with the Temporal cues triggered participants to compare their own affective responses with their peers' and to reflect more on why and how they had different emotions in response to the same video content. While participants perceived the benefits of using AI-generated cues of peers' affects to improve awareness of their own learning affects, they also sought more explanations from their peers to understand the AI-generated results. Our findings not only provide novel design implications for promoting learners' meta-cognition with privacy-preserved social cues of peers' learning affects, but also suggest an expanded design framework for Explainable AI (XAI).
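The abstract does not specify how the two cue types were computed, but one plausible sketch of aggregating per-frame peer positivity scores into Locative cues (spike locations along the timeline) and Temporal cues (per-segment positivity) is shown below; the input format, the spike threshold, and the function names are assumptions for illustration.

```python
import numpy as np

def locative_cues(positivity, threshold=1.5):
    """Return frame indices where aggregate peer emotion spikes.

    `positivity` is assumed to be a 1-D array of per-frame emotion
    positivity scores averaged over peers (e.g., from a facial
    expression recognizer). A 'spike' is any frame whose score
    deviates from the mean by more than `threshold` standard deviations.
    """
    z = (positivity - positivity.mean()) / positivity.std()
    return np.flatnonzero(np.abs(z) > threshold)

def temporal_cues(positivity, n_segments=10):
    """Return the mean positivity of each of `n_segments` equal
    segments of the video, a coarse positivity-over-time summary."""
    segments = np.array_split(positivity, n_segments)
    return np.array([seg.mean() for seg in segments])

# Example with synthetic per-frame positivity scores for 3000 frames.
scores = np.random.default_rng(0).normal(0.0, 1.0, 3000)
print(locative_cues(scores)[:5])
print(temporal_cues(scores))
```

The spike indices would drive timeline markers (Locative), while the segment means would drive a coarse positivity bar per video section (Temporal).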