Title: Visualsickness: A web application to record and organize cybersickness data
Organizing cybersickness data using a paper-based simulator sickness questionnaire (SSQ) is challenging for researchers. We developed a web application that makes it easier to collect, store, organize, and report SSQ data. Using the application, researchers can create studies, define multiple sessions within a study, and administer SSQs at multiple time intervals within a session. In addition, we extended the SSQ by introducing a visual SSQ with emoji animations representing the SSQ's symptoms.
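The abstract describes a study → session → questionnaire hierarchy but not the application's actual schema. As a rough illustration only, a minimal Python sketch of that hierarchy, with hypothetical class and field names:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class SSQEntry:
    """One questionnaire administered at a single time point within a session."""
    administered_at: datetime
    ratings: Dict[str, int]          # 16 SSQ symptoms, each rated 0 (none) to 3 (severe)

    def raw_sum(self) -> int:
        # Raw symptom total; the weighted SSQ subscale scores
        # (Kennedy et al., 1993) would be computed at reporting time.
        return sum(self.ratings.values())

@dataclass
class Session:
    label: str
    questionnaires: List[SSQEntry] = field(default_factory=list)

@dataclass
class Study:
    name: str
    sessions: List[Session] = field(default_factory=list)

# Example: one study, one session, one SSQ (only a few of the 16 symptoms shown).
study = Study("VR locomotion pilot")
study.sessions.append(Session("Session 1", [
    SSQEntry(datetime.now(), {"nausea": 1, "headache": 0, "eyestrain": 2}),
]))
```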
Award ID(s):
2104819
PAR ID:
10464786
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Page Range / eLocation ID:
481 to 484
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Access to high-quality data is an important barrier in the digital analysis of urban settings, including applications within computer vision and urban design. Diverse forms of data collected from sensors in areas of high activity in the urban environment, particularly at street intersections, are valuable resources for researchers interpreting the dynamics between vehicles, pedestrians, and the built environment. In this paper, we present a high-resolution audio, video, and LiDAR dataset of three urban intersections in Brooklyn, New York, totaling almost 8 unique hours. The data were collected with custom Reconfigurable Environmental Intelligence Platform (REIP) sensors that were designed with the ability to accurately synchronize multiple video and audio inputs. The resulting data are novel in that they are inclusively multimodal, multi-angular, high-resolution, and synchronized. We demonstrate four ways the data could be utilized — (1) to discover and locate occluded objects using multiple sensors and modalities, (2) to associate audio events with their respective visual representations using both video and audio modes, (3) to track the amount of each type of object in a scene over time, and (4) to measure pedestrian speed using multiple synchronized camera views. In addition to these use cases, our data are available for other researchers to carry out analyses related to applying machine learning to understanding the urban environment (in which existing datasets may be inadequate), such as pedestrian-vehicle interaction modeling and pedestrian attribute recognition. Such analyses can help inform decisions made in the context of urban sensing and smart cities, including accessibility-aware urban design and Vision Zero initiatives. 
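Use case (2) above relies on the synchronization of the audio and video streams. As an illustration only (the dataset's actual file layout and tooling are not described here), a minimal sketch of matching an audio event to the nearest video frame by timestamp, assuming each modality exposes a sorted list of capture times in seconds:

```python
from bisect import bisect_left
from typing import List

def nearest_frame(frame_times: List[float], event_time: float) -> int:
    """Return the index of the video frame captured closest to an audio event."""
    i = bisect_left(frame_times, event_time)
    if i == 0:
        return 0
    if i == len(frame_times):
        return len(frame_times) - 1
    # Pick whichever neighbouring frame is closer in time.
    return i if frame_times[i] - event_time < event_time - frame_times[i - 1] else i - 1

# Example: an audio event detected at t = 3.21 s in a 30 fps, 10 s clip.
frames = [k / 30.0 for k in range(300)]
print(nearest_frame(frames, 3.21))   # -> 96
```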
  2. In recent years, there has been a growing interest in profiling multiomic modalities within individual cells simultaneously. One such example is integrating combined single-cell RNA sequencing (scRNA-seq) data and single-cell transposase-accessible chromatin sequencing (scATAC-seq) data. Integrated analysis of diverse modalities has helped researchers make more accurate predictions and gain a more comprehensive understanding than with single-modality analysis. However, generating such multimodal data is technically challenging and expensive, leading to limited availability of single-cell co-assay data. Here, we propose a model for cross-modal prediction between the transcriptome and chromatin profiles in single cells. Our model is based on a deep neural network architecture that learns the latent representations from the source modality and then predicts the target modality. It demonstrates reliable performance in accurately translating between these modalities across multiple paired human scATAC-seq and scRNA-seq datasets. Additionally, we developed CrossMP, a web-based portal that allows researchers to upload their single-cell modality data through an interactive web interface and predict the other modality, backed by high-performance computing resources.
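The abstract describes learning a latent representation of the source modality and decoding the target modality from it. The following is a minimal, hypothetical sketch of that encoder/decoder idea, with toy dimensions and randomly initialized weights; it is not the authors' CrossMP architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

n_peaks, n_genes, latent = 5000, 2000, 64       # illustrative dimensions only

# Randomly initialized weights stand in for parameters that would be learned
# from paired scATAC-seq / scRNA-seq cells.
W_enc = rng.normal(scale=0.01, size=(n_peaks, latent))
W_dec = rng.normal(scale=0.01, size=(latent, n_genes))

def predict_rna_from_atac(atac_profile: np.ndarray) -> np.ndarray:
    """Map one cell's chromatin-accessibility vector to a predicted expression vector."""
    z = relu(atac_profile @ W_enc)    # latent representation of the source modality
    return relu(z @ W_dec)            # decoded target modality

cell = rng.random(n_peaks)
print(predict_rna_from_atac(cell).shape)   # (2000,)
```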
  3. Decades of research confirm that interpretation and environmental education on public lands can accomplish a wide variety of positive outcomes for participants, ranging from personal learning and growth to stewardship behaviors both on- and off-site. This research note offers a brief summary of the state of the field of interpretation and environmental education research as applied to public lands. It highlights the general state of knowledge and identifies opportunities for researchers to further enhance our understanding of education on public lands to maximize benefits for visitors and managers alike. In particular, we emphasize the value of large-scale comparative studies as well as collaborative approaches to adaptive management, in which researchers support active experimentation through iterative data collection and analysis within a learning network of multiple program providers. This latter approach promotes evidence-based learning within a larger community of practice in which participants can benefit from the diverse knowledge, experiences, and data that each brings into the network.
  4. There is a growing body of research revealing that longitudinal passive sensing data from smartphones and wearable devices can capture daily behavior signals for human behavior modeling, such as depression detection. Most prior studies build and evaluate machine learning models using data collected from a single population. However, to ensure that a behavior model can work for a larger group of users, its generalizability needs to be verified on multiple datasets from different populations. We present the first work evaluating cross-dataset generalizability of longitudinal behavior models, using depression detection as an application. We collect multiple longitudinal passive mobile sensing datasets with over 500 users from two institutes over a two-year span, leading to four institute-year datasets. Using these datasets, we closely re-implement and evaluate nine prior depression detection algorithms. Our experiments reveal the lack of model generalizability of these methods. We also implement eight recently popular domain generalization algorithms from the machine learning community. Our results indicate that these methods also do not generalize well on our datasets, with barely any advantage over the naive baseline of guessing the majority class. We then present two new algorithms with better generalizability. Our new algorithm, Reorder, significantly and consistently outperforms existing methods on most cross-dataset generalization setups. However, the overall advantage is incremental and there is still great room for improvement. Our analysis reveals that individual differences (both within and between populations) may play the most important role in the cross-dataset generalization challenge. Finally, we provide an open-source benchmark platform, GLOBEM (short for Generalization of Longitudinal BEhavior Modeling), to consolidate all 19 algorithms. GLOBEM can support researchers in using, developing, and evaluating different longitudinal behavior modeling methods. We call for researchers' attention to model generalizability evaluation for future longitudinal human behavior modeling studies.
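The cross-dataset evaluation described above amounts to a leave-one-dataset-out protocol: train on some institute-year datasets and test on the held-out one. A schematic sketch of that protocol, with the dataset names, model interface, and metric left as placeholders (the authors' actual benchmark code lives in the GLOBEM platform):

```python
from typing import Callable, Dict, List, Tuple

# One example = (feature vector over a study term, binary depression label).
Example = Tuple[List[float], int]
Dataset = List[Example]
Model = Callable[[List[float]], int]

def leave_one_dataset_out(
    datasets: Dict[str, Dataset],
    train: Callable[[Dataset], Model],
    score: Callable[[Model, Dataset], float],
) -> Dict[str, float]:
    """Train on all but one institute-year dataset and test on the held-out one."""
    results: Dict[str, float] = {}
    for held_out, test_set in datasets.items():
        train_set = [ex for name, ds in datasets.items() if name != held_out for ex in ds]
        model = train(train_set)
        results[held_out] = score(model, test_set)
    return results
```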
  5. Collecting massive amounts of image data is a common way to record the post-event condition of buildings, to be used by engineers and researchers to learn from that event. Key information needed to interpret the image data collected during these reconnaissance missions is the location within the building where each image was taken. However, image localization is difficult in an indoor environment, as GPS is not generally available because of weak or broken signals. To support rapid, seamless data collection during a reconnaissance mission, we develop and validate a fully automated technique to provide robust indoor localization while requiring no prior information about the condition or spatial layout of an indoor environment. The technique is meant for large-scale data collection across multiple floors within multiple buildings. A systematic method is designed to separate the reconnaissance data into individual buildings and individual floors. Then, for the data within each floor, an optimization problem is formulated to automatically overlay the path onto the structural drawings, providing robust results and, subsequently, the image locations. The end-to-end technique only requires the data collector to wear an additional inexpensive motion camera; thus, it does not add time or effort to the current rapid reconnaissance protocol. As no prior information about the condition or spatial layout of the indoor environment is needed, this technique can be adapted to a large variety of building environments and does not require any type of preparation in post-event settings. This technique is validated using data collected from several real buildings.
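The abstract formulates an optimization problem that overlays the estimated walking path onto the structural drawings. The authors' objective and solver are not reproduced here; as a hedged stand-in, the sketch below fits a 2-D rigid transform (rotation plus translation, with the drawing's scale assumed known) by minimizing the mean distance from transformed path points to points sampled from the drawing:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def fit_path_to_drawing(path_xy: np.ndarray, drawing_xy: np.ndarray) -> np.ndarray:
    """Estimate (theta, tx, ty) aligning an (N, 2) path with (M, 2) drawing points.

    The drawing's scale is assumed known, so only rotation and translation
    are optimized; the objective is a simple stand-in, not the paper's.
    """
    tree = cKDTree(drawing_xy)

    def cost(params: np.ndarray) -> float:
        theta, tx, ty = params
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        transformed = path_xy @ rot.T + np.array([tx, ty])
        dists, _ = tree.query(transformed)   # distance to nearest drawing point
        return float(dists.mean())

    return minimize(cost, np.zeros(3), method="Nelder-Mead").x
```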