
This content will become publicly available on July 1, 2023

Title: Sensitivity to Hand Offsets and Related Behavior in Virtual Environments Over Time
This work explored how users’ sensitivity to offsets in their avatars’ virtual hands changes as they gain exposure to virtual reality. We conducted an experiment using a two-alternative forced choice (2-AFC) design over the course of four weeks, split into four sessions. Each session’s trials combined eight offset distances with eight offset directions (across a 2D plane). While we did not find evidence that users became more sensitive to the offsets over time, we did find evidence of behavioral changes. Specifically, participants’ head-hand coordination and completion time varied significantly across sessions. We discuss the implications of both results and how they could influence our understanding of long-term calibration for perception-action coordination in virtual environments.
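The 8 x 8 condition grid described above can be sketched in a few lines. This is our own illustration, not the study's code: the distance values and the assumption of evenly spaced directions are placeholders, since the abstract does not give the exact offsets used.

```python
import itertools
import math

def make_offset_conditions(distances, n_directions=8):
    """Cross every offset distance with evenly spaced 2D directions,
    returning (dx, dy) offset vectors, one per trial condition."""
    conditions = []
    for d, k in itertools.product(distances, range(n_directions)):
        angle = 2 * math.pi * k / n_directions  # direction in the 2D plane
        conditions.append((d * math.cos(angle), d * math.sin(angle)))
    return conditions

# Eight illustrative offset magnitudes (units are arbitrary here).
distances = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
trials = make_offset_conditions(distances)  # 8 distances x 8 directions = 64
```

In a 2-AFC session, each of these 64 conditions would be presented (typically several times, in random order), and the participant's choices per condition feed a psychometric fit to estimate the detection threshold.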
Authors:
Award ID(s):
1717937
Publication Date:
NSF-PAR ID:
10359245
Journal Name:
ACM Transactions on Applied Perception
ISSN:
1544-3558
Sponsoring Org:
National Science Foundation
More Like this
  1. COVID-19 has altered the landscape of teaching and learning. For those in in-service teacher education, workshops have been suspended, causing programs to adapt their professional development to a virtual space to avoid indefinite postponement or cancellation. This paradigm shift in the way we conduct learning experiences creates several logistical and pedagogical challenges, but it also presents an important opportunity to conduct research about how learning happens in these new environments. This paper describes the approach we took to conduct research in a series of virtual workshops aimed at teaching rural elementary teachers about engineering practices and how to teach a unit from an engineering curriculum. Our work explores how engineering concepts and practices are socially constructed through interactions with teachers, students, and artifacts. This approach, called interactional ethnography, has been used by the authors and others to learn about engineering teaching and learning in precollege classrooms. The approach relies on collecting data during instruction, such as video and audio recordings, interviews, and artifacts such as journal entries and photos of physical designs. Findings are triangulated by analyzing these data sources. This methodology was going to be applied in an in-person engineering education workshop for rural elementary teachers; however, the pandemic forced us to conduct the workshops remotely. Teachers, working in pairs, were sent workshop supplies and worked together during the training series, which took place over Zoom across four days for four hours each session. The paper describes how we collected video and audio of teachers and the facilitators, both in whole group and in breakout rooms. Class materials and submissions of photos and evaluations were managed using Google Classroom. Teachers took photos of their work, scanned written materials, and submitted them all by email.
Slide decks were shared by the users, and their group responses were collected in real time. Workshop evaluations were collected after each meeting using Google Forms. Evaluation data suggest that the teachers were engaged by the experience, learned significantly about engineering concepts and the knowledge-producing practices of engineers, and felt confident about applying engineering activities in their classrooms. This methodology should be of interest to the membership for three distinct reasons. First, remote instruction is a reality in the near term and will likely persist in some form. Although many of us prefer to teach in person, remote learning allows us to reach many more participants, including those living in remote and rural areas who cannot easily attend in-person sessions with engineering educators, so it benefits the field to learn how to teach effectively in this way. Second, it describes an emerging approach to engineering education research. Interactional ethnography has been applied in precollege classrooms, but this paper demonstrates how it can also be used in teacher professional development contexts. Third, based on our application of interactional ethnography to an education setting, readers will learn specifically about how to use online collaborative software and how to collect and organize data sources for research purposes.
  2. Virtual reality games have grown rapidly in popularity since the first consumer VR head-mounted displays were released in 2016; however, comparatively little research has explored how this new medium impacts the experience of players. In this paper, we present a study exploring how user experience changes when playing Minecraft on the desktop and in immersive virtual reality. Fourteen players completed six 45-minute sessions: three played on the desktop and three in VR. The Gaming Experience Questionnaire, the i-Group presence questionnaire, and the Simulator Sickness Questionnaire were administered after each session, and players were interviewed at the end of the experiment. Participants strongly preferred playing Minecraft in VR, despite frustrations with using teleportation as a travel technique and feelings of simulator sickness. Players enjoyed using motion controls, but still continued to use indirect input under certain circumstances. This did not appear to negatively impact feelings of presence. We conclude with four lessons for game developers interested in porting their games to virtual reality.
  3. Environmental temperature is a widely used variable to describe weather and climate conditions. The use of temperature anomalies to identify variations in climate and weather systems makes temperature a key variable to evaluate not only climate variability but also shifts in ecosystem structural and functional properties. In contrast to terrestrial ecosystems, the assessment of regional temperature anomalies in coastal wetlands is more complex since the local temperature is modulated by hydrology and weather. Thus, it is unknown how the regional free-air temperature (T_Free) is coupled to local temperature anomalies, which can vary across interfaces among vegetation canopy, water, and soil that modify the wetland microclimate regime. Here, we investigated the temperature differences (offsets) at those three interfaces in mangrove-saltmarsh ecotones in coastal Louisiana and South Florida in the northern Gulf of Mexico (2017–2019). We found that the canopy offset (range: 0.2–1.6°C) between T_Free and below-canopy temperature (T_Canopy) was caused by the canopy buffering effect. The similar offset values in both Louisiana and Florida underscore the role of vegetation in regulating near-ground energy fluxes. Overall, the inundation depth did not influence soil temperature (T_Soil). The interaction between frequency and duration of inundation, however, significantly modulated T_Soil given the presence of water on the wetland soil surface, thus attenuating any short- or long-term changes in T_Canopy and T_Free. Extreme weather events, including cold fronts and tropical cyclones, induced high defoliation and weakened canopy buffering, resulting in long-term changes in canopy or soil offsets.
These results highlight the need to measure simultaneously the interaction between ecological and climatic processes to reduce uncertainty when modeling macro- and microclimate in coastal areas under a changing climate, especially given the current scarcity of local temperature anomaly data. This work advances the coupling of Earth system models to climate models to forecast regional and global climate change and variability along coastal areas.
  4. Obeid, Iyad ; Selesnick, Ivan ; Picone, Joseph (Ed.)
    The Temple University Hospital Seizure Detection Corpus (TUSZ) [1] has been in distribution since April 2017. It is a subset of the TUH EEG Corpus (TUEG) [2] and the most frequently requested corpus from our 3,000+ subscribers. It was recently featured as the challenge task in the Neureka 2020 Epilepsy Challenge [3]. A summary of the development of the corpus is shown below in Table 1. The TUSZ Corpus is a fully annotated corpus, which means every seizure event that occurs within its files has been annotated. The data is selected from TUEG using a screening process that identifies files most likely to contain seizures [1]. Approximately 7% of the TUEG data contains a seizure event, so it is important we triage TUEG for high-yield data. One hour of EEG data requires approximately one hour of human labor to complete annotation using the pipeline described below, so it is important from a financial standpoint that we accurately triage data. A summary of the labels being used to annotate the data is shown in Table 2. Certain standards are put into place to optimize the annotation process while not sacrificing consistency. Due to the nature of EEG recordings, some records start off with a segment of calibration. This portion of the EEG is instantly recognizable and transitions from what resembles lead artifact to a flat line on all the channels. For the sake of seizure annotation, the calibration is ignored, and no time is wasted on it. During the identification of seizure events, a hard “3 second rule” is used to determine whether two events should be combined into a single larger event. This greatly reduces the time that it takes to annotate a file with multiple events occurring in succession. In addition to the required minimum 3 second gap between seizures, part of our standard dictates that no seizure less than 3 seconds be annotated.
Although there is no universally accepted definition for how long a seizure must be, we find that it is difficult to discern with confidence between burst suppression or other morphologically similar impressions when the event is only a couple seconds long. This is due to several reasons, the most notable being the lack of evolution, which is oftentimes crucial for the determination of a seizure. After the EEG files have been triaged, a team of annotators at NEDC is provided with the files to begin data annotation. An example of an annotation is shown in Figure 1. A summary of the workflow for our annotation process is shown in Figure 2. Several passes are performed over the data to ensure the annotations are accurate. Each file undergoes three passes to ensure that no seizures were missed or misidentified. The first pass of TUSZ involves identifying which files contain seizures and annotating them using our annotation tool. The time it takes to fully annotate a file can vary drastically depending on the specific characteristics of each file; however, on average a file containing multiple seizures takes 7 minutes to fully annotate. This includes the time that it takes to read the patient report as well as traverse through the entire file. Once an event has been identified, the start and stop time for the seizure is stored in our annotation tool. This is done on a channel-by-channel basis, resulting in an accurate representation of the seizure spreading across different parts of the brain. Files that do not contain any seizures take approximately 3 minutes to complete. Even though there is no annotation being made, the file is still carefully examined to make sure that nothing was overlooked. In addition to solely scrolling through a file from start to finish, a file is often examined through different lenses. Depending on the situation, low pass filters are used, as well as increasing the amplitude of certain channels.
These techniques are never used in isolation and are meant to further increase our confidence that nothing was missed. Once each file in a given set has been looked at once, the annotators start the review process. The reviewer checks a file and comments on any changes they recommend. This takes about 3 minutes per seizure-containing file, which is significantly less time than the first pass. After each file has been commented on, the third pass commences. This step takes about 5 minutes per seizure file and requires the reviewer to accept or reject the changes that the second reviewer suggested. Since tangible changes are made to the annotation using the annotation tool, this step takes a bit longer than the previous one. Assuming 18% of the files contain seizures, a set of 1,000 files takes roughly 127 work hours to annotate. Before an annotator contributes to the data interpretation pipeline, they are trained for several weeks on previous datasets. A new annotator can be trained using data that resembles what they would see under normal circumstances. An additional benefit of using released data to train is that it serves as a means of constantly checking our work. If a trainee stumbles across an event that was not previously annotated, it is promptly added, and the data release is updated. It takes about three months to train an annotator to a point where their annotations can be trusted. Even though we carefully screen potential annotators during the hiring process, only about 25% of the annotators we hire survive more than one year doing this work. To ensure that the annotators are consistent in their annotations, the team conducts an interrater agreement evaluation periodically to ensure that there is a consensus within the team. The annotation standards are discussed in Ochal et al. [4]. An extended discussion of interrater agreement can be found in Shah et al. [5].
The most recent release of TUSZ, v1.5.2, represents our efforts to review the quality of the annotations for two upcoming challenges we hosted: an internal deep learning challenge at IBM [6] and the Neureka 2020 Epilepsy Challenge [3]. One of the biggest changes that was made to the annotations was the imposition of a stricter standard for determining the start and stop time of a seizure. Although evolution is still included in the annotations, the start times were altered to start when the spike-wave pattern becomes distinct, as opposed to merely when the signal starts to shift from background. This cuts down on background that was mislabeled as a seizure. For seizure end times, all post-ictal slowing that had been included was removed. The release of v1.5.2 did not include any additional data files; two EEG files that were corrupted in v1.5.1 were retrieved and added for the latest release. The progression from v1.5.0 to v1.5.1 and later to v1.5.2 included the re-annotation of all of the EEG files in order to develop a confident dataset regarding seizure identification. Starting with v1.4.0, we have also developed a blind evaluation set that is withheld for use in competitions. The annotation team is currently working on the next release for TUSZ, v1.6.0, which is expected to occur in August 2020. It will include new data from 2016 to mid-2019. This release will contain 2,296 files from 2016 as well as several thousand files representing the remaining data through mid-2019. In addition to files that were obtained with our standard triaging process, a part of this release consists of EEG files that do not have associated patient reports. Since actual seizure events are in short supply, we are mining a large chunk of data for which we have EEG recordings but no reports.
Some of this data contains interesting seizure events collected during long-term EEG sessions or data collected from patients with a history of frequent seizures. It is being mined to increase the number of files in the corpus that have at least one seizure event. We expect v1.6.0 to be released before IEEE SPMB 2020. The TUAR Corpus is an open-source database that is currently available for use by any registered member of our consortium. To register and receive access, please follow the instructions provided at this web page: https://www.isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml. The data is located here: https://www.isip.piconepress.com/projects/tuh_eeg/downloads/tuh_eeg_artifact/v2.0.0/.
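The “3 second rule” described in the annotation standard above amounts to a simple interval-merging pass: combine events whose gap is under 3 seconds, then discard events shorter than 3 seconds. The sketch below is our own illustration of that rule, not code from the TUSZ pipeline; the function and parameter names are hypothetical.

```python
def apply_three_second_rule(events, min_gap=3.0, min_len=3.0):
    """Apply the annotation standard to a list of (start, stop) seizure
    times in seconds, sorted by start time: merge events separated by
    less than `min_gap`, then drop merged events shorter than `min_len`."""
    merged = []
    for start, stop in events:
        if merged and start - merged[-1][1] < min_gap:
            # Gap under 3 s: fold this event into the previous one.
            merged[-1] = (merged[-1][0], max(merged[-1][1], stop))
        else:
            merged.append((start, stop))
    # Enforce the 3 s minimum duration on the combined events.
    return [(s, e) for s, e in merged if e - s >= min_len]
```

For example, events at (0, 10) and (12, 20) would merge into one event (0, 20), while a lone 1-second event would be dropped, matching the intent of reducing annotation time for files with many events in quick succession.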
  5. Redirected walking techniques use rotational gains to guide users away from physical obstacles as they walk in a virtual world, effectively creating the illusion of a larger virtual space than is physically present. Designers often want to keep users unaware of this manipulation, which is made possible by limitations in human perception that render rotational gains imperceptible below a certain threshold. Many aspects of these thresholds have been studied; however, no research has yet considered whether these thresholds may change over time as users gain more experience with them. To study this, we recruited 20 novice VR users (no more than 1 hour of prior experience with an HMD) and provided them with an Oculus Quest to use for four weeks on their own time. They were tasked to complete an activity assessing their sensitivity to rotational gain once each week, in addition to whatever other activities they wanted to perform. No feedback was provided to participants about their performance during each activity, minimizing the possibility of learning effects accounting for any observed changes over time. We observed that participants became significantly more sensitive to rotational gains over time, underscoring the importance of considering prior user experience in applications involving rotational gain, as well as how prior user experience may affect other, broader applications of VR.
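The core rotational-gain manipulation can be illustrated with a minimal sketch (our own simplification, not the study's implementation): the virtual camera's yaw change is the user's physical yaw change scaled by a gain, and the manipulation goes unnoticed while the gain stays inside the user's detection thresholds. The threshold bounds below are illustrative placeholders, not values from the study, which found that such thresholds can tighten with experience.

```python
def virtual_yaw_delta(physical_yaw_delta, gain):
    """Scale a physical head rotation (degrees) by a rotational gain.
    gain > 1 rotates the virtual world faster than the user's head;
    gain < 1 rotates it slower, steering the user within the real space."""
    return physical_yaw_delta * gain

def gain_is_detectable(gain, lower=0.67, upper=1.24):
    """Hypothetical threshold check: a gain outside [lower, upper] is
    assumed noticeable. The bounds here are placeholders and would need
    to be re-measured per user, since sensitivity can change over time."""
    return not (lower <= gain <= upper)
```

A redirected walking controller would pick the largest gain that steers the user away from obstacles while `gain_is_detectable` stays false, re-estimating the bounds per user as their experience grows.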