Human-scale immersive environments offer rich, often interactive experiences, and their potential has been demonstrated across research, teaching, and art. The variety of these spaces and their bespoke configurations leads to a requirement for content highly tailored to individual environments and/or interfaces that require complicated installation. These hurdles burden users with tedious and difficult learning curves, leaving less time for project development and rapid prototyping. This project demonstrates an interactive application for control and rapid prototyping within the Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab). Application Programming Interfaces (APIs) make complex functions of the immersive environment, such as audio spatialization, accessible via the Internet. A front-end interface configured to communicate with these APIs gives users simple, intuitive control over these functions from their personal devices (e.g., laptops, smartphones). While bespoke systems often require bespoke solutions, this interface allows users to create content on day one, from their own devices, without setup, content tailoring, or training. Three examples utilizing some or all of these functions are discussed.
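As a rough illustration of the API-driven control described above, the following Python sketch builds a request payload for a hypothetical audio-spatialization endpoint. The endpoint path, field names, and value ranges are illustrative assumptions, not the CRAIVE-Lab's actual API.

```python
import json

def make_spatialization_request(source_id, azimuth_deg, gain_db=0.0):
    """Build a JSON payload for a hypothetical audio-spatialization endpoint.

    The endpoint path and field names are illustrative only; a front-end
    running on a laptop or smartphone would POST a payload like this to
    the environment's web-facing API.
    """
    if not -180.0 <= azimuth_deg <= 180.0:
        raise ValueError("azimuth must be in [-180, 180] degrees")
    return json.dumps({
        "endpoint": "/audio/spatialize",   # assumed route
        "source": source_id,
        "azimuth": azimuth_deg,            # degrees, 0 = front of room
        "gain": gain_db,                   # decibels
    })
```

A personal device would send this payload over HTTP; the server translates it into commands for the loudspeaker rendering system.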
Interactions in a Human-Scale Immersive Environment: the CRAIVE-Lab
We describe interfaces and visualizations in the CRAIVE (Collaborative Research Augmented Immersive Virtual Environment) Lab, an interactive human-scale immersive environment at Rensselaer Polytechnic Institute. We describe the physical infrastructure and software architecture of the CRAIVE-Lab and present two immersive scenarios within it. The first is "person following", in which a person walking inside the immersive space is tracked by simple objects on the screen. This was implemented as a proof of concept of the overall system, which includes visual tracking from an overhead array of cameras, communication of the tracking results, and large-scale projection and visualization. The second, "smart presentation", features multimedia on the screen that reacts to the position of a person walking around the environment by playing or pausing automatically, and additionally supports real-time speech-to-text transcription. Our goal is to continue research on natural human interactions in this large environment without requiring user-worn devices for tracking or speech recording.
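The "person following" pipeline must map a tracked floor position to a location on the surrounding screen. A minimal sketch of that mapping, assuming a room-centered coordinate frame and an illustrative panoramic resolution (the abstract specifies neither):

```python
import math

def person_to_screen_x(px, py, screen_width_px=15360):
    """Map a tracked floor position (meters, room-centered) to a horizontal
    pixel column on a wrap-around panoramic display.

    The resolution and the assumption of a full 360-degree wrap are
    illustrative, not the CRAIVE-Lab's actual screen geometry.
    """
    angle = math.atan2(py, px) % (2 * math.pi)  # bearing from room center
    return int(angle / (2 * math.pi) * screen_width_px)
```

An on-screen object following the person would simply be redrawn at this column each time the overhead camera array publishes a new position.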
- Award ID(s):
- 1631674
- PAR ID:
- 10026264
- Date Published:
- Journal Name:
- Cross-Surface 2016, in conjunction with the ACM International Conference on Interactive Surfaces and Spaces
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Locus is a NIME designed specifically for an interactive, immersive, high-density loudspeaker array environment. The system is based on a pointing mechanism for interacting with a sound scene comprising 128 speakers. Users can point anywhere to interact with the system; because the spatial interaction utilizes motion capture, it does not require a screen. Instead, it is controlled entirely via hand gestures using a glove populated with motion-tracking markers. The main purpose of this system is to offer intuitive physical interaction with perimeter-based spatial sound sources. A further goal is to minimize user-worn technology, and thereby enhance freedom of motion, by utilizing environmental sensing devices such as motion-capture cameras or infrared sensors. The ensuing creativity-enabling technology is applicable to a broad array of scenarios, from researching the limits of human spatial hearing perception to facilitating learning and artistic performances, including dance. Below we describe our NIME design and implementation, its preliminary assessment, and a Unity-based toolkit to facilitate its broader deployment and adoption.
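The pointing mechanism can be sketched as a ray-circle intersection: cast a ray from the tracked hand outward, find where it crosses the loudspeaker perimeter, and select the nearest of the 128 speakers. The circular geometry and radius below are assumptions for illustration, not Locus's actual implementation.

```python
import math

def select_speaker(hand_xy, dir_xy, n_speakers=128, radius=4.0):
    """Return the index of the perimeter speaker a pointing ray selects.

    Assumes speakers are evenly spaced on a circle of the given radius
    (meters) centered at the origin, with speaker 0 on the +x axis;
    both values are illustrative.
    """
    hx, hy = hand_xy
    dx, dy = dir_xy
    # Solve |h + t*d|^2 = r^2 for the forward (exit) intersection t > 0.
    a = dx * dx + dy * dy
    b = 2 * (hx * dx + hy * dy)
    c = hx * hx + hy * hy - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray never reaches the perimeter
    t = (-b + math.sqrt(disc)) / (2 * a)  # larger root exits the circle
    ix, iy = hx + t * dx, hy + t * dy
    angle = math.atan2(iy, ix) % (2 * math.pi)
    return round(angle / (2 * math.pi) * n_speakers) % n_speakers
```

In practice the hand position and pointing direction would come from the motion-capture markers on the glove; the selected index then drives the spatial sound source.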
-
During active shooter events or emergencies, the ability of security personnel to respond appropriately is driven by pre-existing knowledge and skills, but also depends upon their state of mind and familiarity with similar scenarios. Human behavior becomes unpredictable when decisions must be made in emergency situations, and the cost and risk of measuring these behavioral characteristics in real emergencies are very high. This paper presents an immersive collaborative virtual reality (VR) environment for performing virtual building evacuation drills and active shooter training scenarios using Oculus Rift head-mounted displays. The collaborative immersive environment is implemented in Unity 3D and is based on the run, hide, and fight modes of emergency response. It also offers a unique method for emergency training for campus safety: participants can enter the collaborative VR environment hosted on the cloud and take part in the active shooter response training, which yields considerable cost advantages over large-scale real-life exercises. A presence questionnaire in the user study was used to evaluate the effectiveness of the immersive training module. The results show that a majority of users agreed that their sense of presence was increased when using the immersive emergency…
-
As conversational agents and digital assistants become increasingly pervasive, understanding their synthetic speech becomes increasingly important. Simultaneously, speech synthesis is becoming more sophisticated and manipulable, providing the opportunity to optimize speech rate to save users time. However, little is known about people's ability to understand fast speech. In this work, we provide the first large-scale study of human listening rates. Run on LabintheWild with volunteer participants, the study was screen-reader accessible and measured listening rate by accuracy at answering questions spoken by a screen reader at various rates. Our results show that blind and low-vision people, who often rely on audio cues and access text aurally, generally have higher listening rates than sighted people. The findings also suggest a need to expand the range of rates available on personal devices. These results demonstrate the potential for users to learn to listen at faster rates, expanding the possibilities for human-conversational agent interaction.
-
24/7 continuous recording of in-home daily trajectories is informative for health status assessment (e.g., monitoring Alzheimer's or dementia based on behavior patterns). Indoor device-free localization/tracking is ideal because users need not wear devices. However, prior work mainly focused on improving localization accuracy and relied on well-calibrated sensor placements, which require hours of intensive manual setup and respective expertise, feasible only at small scale and mostly by researchers themselves. Scaling such deployments to tens or hundreds of real homes would incur prohibitive manual effort and become infeasible for layman users. We present SCALING, a plug-and-play indoor trajectory monitoring system that layman users can easily set up by walking a one-minute loop trajectory after placing radar nodes on walls. It uses a self-calibrating algorithm that estimates sensor locations from their distance measurements to the person walking the trajectory, a trivial effort that does not tax layman users physically or cognitively. We evaluate SCALING via simulations and two testbeds (lab and home configurations of sizes 3 × 6 sq m and 4.5 × 8.5 sq m). Experimental results demonstrate that SCALING outperformed a baseline using approximate multidimensional scaling (MDS, the most relevant self-calibration method) by 3.5 m / 1.6 m in 80th-percentile error of self-calibration and tracking, respectively. Notably, only 1% degradation in performance was observed with SCALING compared to classical multilateration with known sensor locations (anchors), which costs hours of intensive calibration effort.
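The classical multilateration baseline mentioned above, estimating a position from range measurements to sensors at known locations, can be sketched by linearizing the circle equations. This is the generic textbook formulation, not SCALING's self-calibration algorithm, and the anchor layout below is made up for illustration.

```python
import numpy as np

def multilaterate(anchors, dists):
    """Estimate a 2-D position from ranges to known anchor locations.

    Each range gives a circle |p - a_i|^2 = d_i^2; subtracting the first
    circle equation from the others cancels the quadratic term and leaves
    a linear system 2(a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2,
    solved here by least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

SCALING's contribution is removing the need for the known anchor positions this baseline assumes: it recovers the sensor locations themselves from ranges to a person walking a short loop.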