
Title: Studies in spatial aural perception: establishing foundations for immersive sonification
The Spatial Audio Data Immersive Experience (SADIE) project aims to identify new foundational relationships pertaining to human spatial aural perception and to validate existing relationships. Our infrastructure consists of an intuitive interaction interface, an immersive exocentric sonification environment, and a layer-based amplitude-panning algorithm. Here we highlight the system's unique capabilities and provide findings from an initial externally funded study that focuses on the assessment of human aural spatial perception capacity. When compared to the existing body of literature focusing on egocentric spatial perception, our data show that an immersive exocentric environment enhances spatial perception and that the physical implementation using high-density loudspeaker arrays enables significantly improved spatial perception accuracy relative to the egocentric and virtual binaural approaches. These preliminary observations suggest that human spatial aural perception capacity in real-world-like immersive exocentric environments that allow for head and body movement is significantly greater than in egocentric scenarios where head and body movement is restricted. Therefore, in the design of immersive auditory displays, the use of immersive exocentric environments is advised. Further, our data identify a significant gap between physical and virtual human spatial aural perception accuracy, which suggests that further development of virtual aural immersion may be necessary before such an approach can be considered a viable alternative.
Authors:
Award ID(s):
1748667
Publication Date:
NSF-PAR ID:
10108418
Journal Name:
International Conference on Auditory Display
Page Range or eLocation-ID:
28-35
Sponsoring Org:
National Science Foundation
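
The record above credits a layer-based amplitude-panning algorithm for driving the high-density loudspeaker array. The paper's own formulation is not reproduced here, so the following is only a minimal constant-power sketch under generic assumptions: loudspeakers arranged in horizontal rings (layers) at fixed elevations, a sin/cos crossfade between the two layers bracketing the source elevation, and pairwise panning by azimuth within each layer. The function and parameter names are illustrative.

```python
import numpy as np

def layer_pan_gains(az_deg, el_deg, layer_els_deg, n_per_layer):
    """Constant-power gains for one source on a ring-layered array.

    Returns a dict mapping (layer_index, speaker_index) -> gain.
    Assumes layer elevations are sorted ascending and each layer has
    n_per_layer speakers equally spaced in azimuth starting at 0 deg.
    (Generic sketch, not the SADIE algorithm itself.)
    """
    els = np.asarray(layer_els_deg, dtype=float)
    hi = int(np.searchsorted(els, el_deg))
    lo, hi = max(hi - 1, 0), min(hi, len(els) - 1)
    if hi == lo:                        # source above/below the array: one layer
        layer_gain = {lo: 1.0}
    else:                               # sin/cos crossfade between the two layers
        t = np.clip((el_deg - els[lo]) / (els[hi] - els[lo]), 0.0, 1.0)
        layer_gain = {lo: np.cos(t * np.pi / 2), hi: np.sin(t * np.pi / 2)}

    gains = {}
    step = 360.0 / n_per_layer
    pos = (az_deg % 360.0) / step
    left = int(pos) % n_per_layer
    frac = pos - int(pos)
    for layer, g in layer_gain.items():  # pairwise pan within each layer
        gains[(layer, left)] = g * np.cos(frac * np.pi / 2)
        gains[(layer, (left + 1) % n_per_layer)] = g * np.sin(frac * np.pi / 2)
    return gains

# Example: 4 rings at -30/0/30/60 deg elevation, 8 speakers per ring.
print(layer_pan_gains(az_deg=20.0, el_deg=15.0,
                      layer_els_deg=[-30, 0, 30, 60], n_per_layer=8))
```

The squared gains sum to one, so total emitted power stays constant as the source moves, which is the usual design goal for amplitude panning across a dense array.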
More Like this
  1. Background Sustained engagement is essential for the success of telerehabilitation programs. However, patients’ lack of motivation and adherence could undermine these goals. To overcome this challenge, physical exercises have often been gamified. Building on the advantages of serious games, we propose a citizen science–based approach in which patients perform scientific tasks by using interactive interfaces and help advance scientific causes of their choice. This approach capitalizes on human intellect and benevolence while promoting learning. To further enhance engagement, we propose performing citizen science activities in immersive media, such as virtual reality (VR). Objective This study aims to present a novel methodology to facilitate the remote identification and classification of human movements for the automatic assessment of motor performance in telerehabilitation. The data-driven approach is presented in the context of citizen science software dedicated to bimanual training in VR. Specifically, users interact with the interface and make contributions to an environmental citizen science project while moving both arms in concert. Methods In all, 9 healthy individuals interacted with the citizen science software by using a commercial VR gaming device. The software included a calibration phase to evaluate the users’ range of motion along the 3 anatomical planes of motion and to adapt the sensitivity of the software’s response to their movements. During calibration, the time series of the users’ movements were recorded by the sensors embedded in the device. We performed principal component analysis to identify salient features of the movements and then applied a bagged trees ensemble classifier to classify the movements. Results The classification achieved high performance, reaching 99.9% accuracy. Among the movements, elbow flexion was the most accurately classified movement (99.2%), and horizontal shoulder abduction to the right side of the body was the most misclassified movement (98.8%). Conclusions Coordinated bimanual movements in VR can be classified with high accuracy. Our findings lay the foundation for the development of motion analysis algorithms in VR-mediated telerehabilitation.
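
The analysis pipeline described in item 1 (principal component analysis to extract salient features, followed by a bagged-trees ensemble) maps directly onto standard tooling. Below is a minimal scikit-learn sketch; the synthetic feature matrix, component count, tree count, and class count are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(270, 60))    # placeholder: flattened movement windows
y = rng.integers(0, 6, size=270)  # placeholder: 6 movement classes

clf = make_pipeline(
    PCA(n_components=10),                # keep the salient components
    BaggingClassifier(n_estimators=50,   # default base estimator is a
                      random_state=0),   # decision tree, i.e. bagged trees
)
print(f"mean CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```

With the random placeholder data the score is near chance; the point of the sketch is the pipeline shape, not the reported 99.9% result.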
  2. While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings to push forward research towards more realistic physicality in future VR/AR.
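
The decoder in item 2 (a neural-network model regressing finger-wise forces from forearm EMG) can be sketched as a small multi-output regressor. The channel count, per-channel RMS features, layer sizes, and synthetic data below are assumptions for illustration, not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, n_channels, n_fingers = 2000, 8, 5
emg_rms = rng.random((n_windows, n_channels))  # per-channel RMS features (assumed)
true_map = rng.random((n_channels, n_fingers))
forces = emg_rms @ true_map + 0.05 * rng.normal(size=(n_windows, n_fingers))

X_tr, X_te, y_tr, y_te = train_test_split(emg_rms, forces, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32),  # illustrative layer sizes
                     max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)  # multi-output regression: one force per finger
err = np.mean(np.abs(model.predict(X_te) - y_te))
print(f"mean absolute force error: {err:.3f}")
```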
  3. Recent studies have established immersive virtual environments (IVEs) as promising tools for studying human thermal states and human–building interactions. One advantage of using immersive virtual environments is that experiments or data collection can be conducted at any time of the year. However, previous studies have confirmed the potential impact of outdoor temperature variations, such as seasonal variations, on human thermal sensation. To the best of our knowledge, no study has looked into the potential impact of variations in outdoor temperature on experiments using IVEs. Thus, this study aimed to determine whether different outdoor temperature conditions affected the thermal states in experiments using IVEs. Experiments were conducted using a head-mounted display (HMD) in a climate chamber, and the data were analyzed under three temperature ranges. A total of seventy-two people participated in the experiments, conducted in two contrasting outdoor temperature conditions, i.e., cold and warm outdoor conditions. The in situ experiments conducted in two cases, i.e., cooling in warm outdoor conditions and heating in cold outdoor conditions, were used as a baseline. The baseline in situ experiments were then compared with the IVE experiments conducted in four cases, i.e., cooling in warm and cold outdoor conditions and heating in warm and cold outdoor conditions. Cooling in cold outdoor conditions and heating in warm outdoor conditions were selected for the IVE experiments specifically to study the impact of outdoor temperature variations. Results showed that, under the studied experimental and outdoor temperature conditions, outdoor temperature variations in most cases did not impact the results of IVE experiments, i.e., IVE experiments can replicate a temperature environment for participants comparable to the one in the in situ experiments. In addition, the participants’ thermal sensation vote was found to be a reliable indicator between IVE and in situ settings in all studied conditions. A few significantly different cases were related to thermal comfort, thermal acceptability, and overall skin temperature.
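
The core comparison in item 3, thermal responses in IVE sessions versus in situ baselines, amounts to testing whether two sets of thermal sensation votes differ. A generic sketch follows; the synthetic votes and the choice of a Mann-Whitney U test are assumptions, since the abstract does not name the statistical procedure used.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Synthetic thermal sensation votes on the ASHRAE 7-point scale (-3..+3);
# 36 participants per condition is illustrative, not the study's split.
tsv_in_situ = rng.integers(-1, 3, size=36)
tsv_ive = rng.integers(-1, 3, size=36)

stat, p = mannwhitneyu(tsv_in_situ, tsv_ive, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # p > 0.05 -> no detectable difference
```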
  4. Interest in physical therapy and individual exercises such as yoga/dance has increased alongside the well-being trend, and people globally enjoy such exercises at home/office via video streaming platforms. However, such exercises are hard to follow without expert guidance. Even if experts can help, it is almost impossible to give personalized feedback to every trainee remotely. Thus, automated pose correction systems are required more than ever, and we introduce a new captioning dataset named FixMyPose to address this need. We collect natural language descriptions of correcting a “current” pose to look like a “target” pose. To support a multilingual setup, we collect descriptions in both English and Hindi. The collected descriptions have interesting linguistic properties such as egocentric relations to the environment objects, analogous references, etc., requiring an understanding of spatial relations and commonsense knowledge about postures. Further, to avoid ML biases, we maintain a balance across characters with diverse demographics, who perform a variety of movements in several interior environments (e.g., homes, offices). From our FixMyPose dataset, we introduce two tasks: the pose-correctional-captioning task and its reverse, the target-pose-retrieval task. During the correctional-captioning task, models must generate the descriptions of how to move from the current to the target pose image, whereas in the retrieval task, models should select the correct target pose given the initial pose and the correctional description. We present strong cross-attention baseline models (uni/multimodal, RL, multilingual) and also show that our baselines are competitive with other models when evaluated on other image-difference datasets. We also propose new task-specific metrics (object-match, body-part-match, direction-match) and conduct human evaluation for more reliable evaluation, and we demonstrate a large human-model performance gap suggesting room for promising future work. Finally, to verify the sim-to-real transfer of our FixMyPose dataset, we collect a set of real images and show promising performance on these images. Data and code are available: https://fixmypose-unc.github.io.
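
Item 4 names its task-specific metrics (object-match, body-part-match, direction-match) without defining them, so the snippet below is only a hypothetical reading of direction-match: a generated correction should mention the same direction words as the reference caption. The keyword list and the set-equality rule are assumptions.

```python
# Hypothetical "direction-match" metric; not the paper's definition.
DIRECTIONS = {"left", "right", "up", "down", "forward", "backward"}

def direction_match(generated: str, reference: str) -> bool:
    """True if both captions mention exactly the same direction words."""
    gen = {w for w in generated.lower().split() if w in DIRECTIONS}
    ref = {w for w in reference.lower().split() if w in DIRECTIONS}
    return gen == ref

print(direction_match("move your left arm up", "raise the left arm up"))  # True
print(direction_match("move your left arm up", "lower the right arm"))    # False
```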
  5. Exoskeletons, as a human augmentation technology, have shown great potential for transforming future civil engineering operations. However, inappropriate use of an exoskeleton could cause injury and damage if the user is not well trained. Effective procedural and operational training makes users more aware of the capabilities, restrictions, and risks associated with exoskeletons in civil engineering operations. At present, the low availability and high cost of exoskeleton systems make hands-on training less feasible. In addition, different exoskeleton designs entail different activation procedures, muscular engagement, and motion boundaries, posing further challenges to exoskeleton training. We propose a “sensation transfer” approach that migrates the physical experience of wearing a real exoskeleton system to first-time users via a passive haptic system in an immersive virtual environment. The body motion and muscular engagement data of 15 experienced exoskeleton users were recorded and replayed in a virtual reality environment. A set of haptic devices on key parts of the body (shoulders, elbows, hands, and waist) then generates different patterns of haptic cues depending on how accurately the trainees mimic the actions. This sensation transfer method will enhance the haptic learning experience and thereby accelerate training.
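
A minimal sketch of the cueing logic described in item 5: compare the trainee's tracked key-point positions against the recorded expert motion and scale each haptic actuator's intensity with the local error. The site list matches the abstract; the error threshold and the linear error-to-intensity mapping are illustrative assumptions.

```python
import numpy as np

# Body sites carrying haptic actuators, per the abstract.
SITES = ("shoulders", "elbows", "hands", "waist")

def haptic_intensities(trainee_pos, expert_pos, max_err_m=0.15):
    """Map per-site position error (meters) to a cue intensity in [0, 1].

    Linear ramp with saturation at max_err_m; both mapping and threshold
    are assumptions for illustration.
    """
    out = {}
    for site in SITES:
        err = np.linalg.norm(np.asarray(trainee_pos[site]) -
                             np.asarray(expert_pos[site]))
        out[site] = float(np.clip(err / max_err_m, 0.0, 1.0))
    return out

cues = haptic_intensities(
    {"shoulders": [0.0, 1.5, 0.1], "elbows": [0.3, 1.2, 0.1],
     "hands": [0.5, 1.0, 0.2], "waist": [0.0, 1.0, 0.0]},
    {"shoulders": [0.0, 1.5, 0.0], "elbows": [0.3, 1.25, 0.1],
     "hands": [0.45, 1.0, 0.2], "waist": [0.0, 1.0, 0.0]},
)
print(cues)  # larger deviation from the expert replay -> stronger cue
```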