

Search for: All records

Award ID contains: 1840131

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. As humans, we can modify our assumptions about a scene by imagining alternative objects or concepts in our minds. For example, we can easily anticipate the implications of the sun being overcast by rain clouds (e.g., the street will get wet) and accordingly prepare for that. In this paper, we introduce a new task/dataset called Commonsense Reasoning for Counterfactual Scene Imagination (COSIM) which is designed to evaluate the ability of AI systems to reason about scene change imagination. In this task/dataset, models are given an image and an initial question-response pair about the image. Next, a counterfactual imagined scene change (in textual form) is applied, and the model has to predict the new response to the initial question based on this scene change. We collect 3.5K high-quality and challenging data instances, with each instance consisting of an image, a commonsense question with a response, a description of a counterfactual change, a new response to the question, and three distractor responses. Our dataset contains various complex scene change types (such as object addition/removal/state change, event description, environment change, etc.) that require models to imagine many different scenarios and reason about the changed scenes. We present a baseline model based on a vision-language Transformer (i.e., LXMERT) and ablation studies. Through human evaluation, we demonstrate a large human-model performance gap, suggesting room for promising future work on this challenging counterfactual, scene imagination task. 
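    As a concrete illustration of the task format, the sketch below scores each candidate response against the image, the question, the original answer, and the counterfactual change, and picks the highest-scoring one. The instance fields and the score_fn wrapper are hypothetical stand-ins for the released dataset schema and the LXMERT-based baseline, not the authors' code.

# Hypothetical sketch of COSIM-style multiple-choice evaluation.
# The instance fields and the score_fn interface are illustrative
# assumptions, not the released dataset schema or baseline code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CosimInstance:
    image_path: str          # image the question is grounded in
    question: str            # commonsense question about the image
    original_response: str   # answer before the imagined change
    counterfactual: str      # textual description of the scene change
    candidates: List[str]    # new response plus three distractors
    answer_idx: int          # index of the correct new response


def predict(inst: CosimInstance,
            score_fn: Callable[[str, str], float]) -> int:
    """Pick the candidate whose joint (image, text) score is highest.

    score_fn(image_path, text) stands in for any vision-language
    encoder (e.g., an LXMERT-style model with a scoring head).
    """
    context = (f"question: {inst.question} "
               f"original answer: {inst.original_response} "
               f"change: {inst.counterfactual}")
    scores = [score_fn(inst.image_path, f"{context} candidate: {c}")
              for c in inst.candidates]
    return max(range(len(scores)), key=scores.__getitem__)


if __name__ == "__main__":
    # Dummy keyword scorer so the sketch runs end to end; a real system
    # would replace this with a trained vision-language model.
    toy = CosimInstance("street.jpg", "Is it a good day for a picnic?",
                        "Yes, it is sunny.", "Rain clouds cover the sun.",
                        ["Yes, even better.", "No, the ground will be wet.",
                         "Yes, nothing changes.", "The street disappears."], 1)
    dummy_score = lambda img, text: float(text.lower().count("wet") + text.lower().count("rain"))
    print("predicted:", toy.candidates[predict(toy, dummy_score)])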
  2. Demand for image editing is increasing as users' desire for expression grows. However, for most users, image editing tools are not easy to use, since they require certain expertise in photo effects and have complex interfaces. Hence, users might need someone to help edit their images, but providing a dedicated human assistant for every user is impossible to scale. For that reason, an automated assistant system for image editing is desirable. Additionally, users want more image sources for diverse image editing work, and integrating an image search functionality into the editing tool is a potential remedy for this demand. Thus, we propose a dataset for an automated Conversational Agent for Image Search and Editing (CAISE). To our knowledge, this is the first dataset that provides conversational image search and editing annotations, where the agent holds a grounded conversation with users and helps them search and edit images according to their requests. To build such a system, we first collect image search and editing conversations between pairs of annotators. The assistant-annotators are equipped with a customized image search and editing tool to address the requests from the user-annotators. The functions that the assistant-annotators execute with the tool are recorded as executable commands, allowing the trained system to be useful for real-world application execution. We also introduce a generator-extractor baseline model for this task, which can adaptively select the source of the next token (i.e., from the vocabulary or from textual/visual contexts) for the executable command. This serves as a strong starting point while still leaving a large human-machine performance gap for useful future work. 
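    The generator-extractor idea above resembles a pointer-generator decoder: at each step a gate mixes a distribution over the vocabulary with a copy distribution over context tokens. The numpy sketch below illustrates only that mixing step; the shapes, names, and gate computation are illustrative assumptions rather than the paper's exact model.

# Minimal numpy sketch of generate-vs-copy token selection. All names
# and shapes are illustrative assumptions, not the CAISE baseline code.
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def next_token_distribution(vocab_logits, copy_scores, context_token_ids,
                            gate_logit, vocab_size):
    """Return P(next token) over the full vocabulary.

    vocab_logits:      (V,) scores for generating from the vocabulary
    copy_scores:       (T,) attention scores over context tokens
                       (textual and/or visual-region tokens)
    context_token_ids: (T,) vocabulary id of each context token
    gate_logit:        scalar controlling generate-vs-copy mixing
    """
    p_gen = 1.0 / (1.0 + np.exp(-gate_logit))          # sigmoid gate
    p_vocab = softmax(vocab_logits)
    p_copy_ctx = softmax(copy_scores)

    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, context_token_ids, p_copy_ctx)   # scatter copy mass

    return p_gen * p_vocab + (1.0 - p_gen) * p_copy


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V, T = 50, 6
    dist = next_token_distribution(rng.normal(size=V), rng.normal(size=T),
                                   rng.integers(0, V, size=T),
                                   gate_logit=-0.3, vocab_size=V)
    print("sums to 1:", np.isclose(dist.sum(), 1.0))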
  3. Fashion attribute editing aims to manipulate fashion images based on a user-specified attribute, while preserving the details of the original image as intact as possible. Recent works in this domain have mainly focused on direct manipulation of the raw RGB pixels, which only allows edits involving relatively small shape changes (e.g., sleeves). The goal of our Virtual Personal Tailoring Network (VPTNet) is to extend the editing capabilities to much larger shape changes of fashion items, such as cloth length. To achieve this goal, we decouple the fashion attribute editing task into two conditional stages: shape-then-appearance editing. To this aim, we propose a shape editing network that employs a semantic parsing of the fashion image as an interface for manipulation. Compared to operating on the raw RGB image, our parsing map editing enables performing more complex shape editing operations. Second, we introduce an appearance completion network that takes the previous stage results and completes the shape difference regions to produce the final RGB image. Qualitative and quantitative experiments on the DeepFashion-Synthesis dataset confirm that VPTNet outperforms state-of-the-art methods for both small and large shape attribute editing. 
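    One way to picture the hand-off between the two stages is the shape-difference region: the pixels whose semantic label changes after the parsing-map edit are exactly what the appearance completion network must synthesize. The sketch below computes such a mask under an assumed label layout and a toy edit; it is not the paper's implementation.

# Sketch of deriving the shape-difference region from a parsing-map edit.
# Label ids and the toy "lengthen the garment" edit are assumptions.
import numpy as np


def shape_difference_mask(parsing_before, parsing_after):
    """Pixels whose semantic label changed between the original and the
    edited parsing map; these are the regions the appearance completion
    network has to synthesize."""
    return parsing_before != parsing_after


if __name__ == "__main__":
    H, W = 8, 6
    before = np.zeros((H, W), dtype=np.int64)   # 0 = background
    before[2:5, 1:5] = 1                        # 1 = short garment
    after = before.copy()
    after[5:7, 1:5] = 1                         # lengthened garment
    mask = shape_difference_mask(before, after)
    print("pixels to complete:", int(mask.sum()))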
  4. In this paper, we aim to synthesize a free-viewpoint video of an arbitrary human performance using sparse multi-view cameras. Recently, several works have addressed this problem by learning person-specific neural radiance fields (NeRF) to capture the appearance of a particular human. In parallel, other works have proposed using pixel-aligned features to generalize radiance fields to arbitrary new scenes and objects. Applying such generalization approaches to humans, however, is highly challenging due to the heavy occlusions and dynamic articulations of body parts. To tackle this, we propose Neural Human Performer, a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture. Specifically, we first introduce a temporal transformer that aggregates tracked visual features based on the skeletal body motion over time. Moreover, a multi-view transformer is proposed to perform cross-attention between the temporally-fused features and the pixel-aligned features at each time step to integrate observations on the fly from multiple views. Experiments on the ZJU-MoCap and AIST datasets show that our method significantly outperforms recent generalizable NeRF methods on unseen identities and poses. The video results and code are available at https://youngjoongunc.github.io/nhp. 
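    The sketch below wires together the two attention stages described above, a temporal self-attention over tracked features followed by cross-attention from the temporal summary to per-view pixel-aligned features, using standard PyTorch modules. Tensor shapes, the mean-pooled query, and the module wiring are illustrative assumptions, not the released architecture.

# Schematic two-stage attention sketch; shapes and wiring are assumptions.
import torch
import torch.nn as nn


class TwoStageAggregator(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.multiview_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tracked_feats, pixel_aligned_feats):
        # tracked_feats:       (B, T, D) features tracked along the skeletal
        #                      motion over T time steps
        # pixel_aligned_feats: (B, V, D) per-view features at the query point
        fused, _ = self.temporal_attn(tracked_feats, tracked_feats, tracked_feats)
        query = fused.mean(dim=1, keepdim=True)            # (B, 1, D) temporal summary
        out, _ = self.multiview_attn(query, pixel_aligned_feats, pixel_aligned_feats)
        return out.squeeze(1)                              # (B, D) feature for the radiance head


if __name__ == "__main__":
    model = TwoStageAggregator()
    x = model(torch.randn(2, 5, 64), torch.randn(2, 3, 64))
    print(x.shape)  # torch.Size([2, 64])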
  5. We present a new approach, EgoGlass, towards egocentric motion capture and human pose estimation. EgoGlass is a lightweight eyeglass frame with two cameras mounted on it. Our first contribution is a new egocentric motion-capture device that adds next to no extra burden on the user, together with a dataset of real people doing a diverse set of actions captured by EgoGlass. Second, we propose to utilize body part information for human pose detection, to help tackle the problems of limited body coverage and self-occlusions caused by the egocentric viewpoint and the cameras' proximity to the human body. We also propose the concept of a pseudo-limb mask as an alternative to a segmentation mask when ground-truth segmentation is absent for egocentric images with real subjects. We demonstrate that our method achieves better results than the counterpart method without body part information on our dataset. We also test our method on two existing egocentric datasets: xR-EgoPose and EgoCap. Our method achieves state-of-the-art results on xR-EgoPose and is on par with the existing method on EgoCap without requiring temporal information or personalization for each individual user. 
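    One plausible way to build a pseudo-limb mask when no ground-truth segmentation exists is to rasterize thick segments between predicted 2D joints, as sketched below. The skeleton connectivity, limb thickness, and joint layout are illustrative assumptions, not the paper's recipe.

# Toy pseudo-limb mask rasterization; connectivity and thickness are assumptions.
import cv2
import numpy as np

# (parent, child) joint index pairs for a toy 6-joint upper-body skeleton
LIMBS = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]


def pseudo_limb_mask(joints_2d, height, width, thickness=8):
    """joints_2d: (J, 2) array of (x, y) pixel coordinates; returns a {0,1} mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for a, b in LIMBS:
        pt_a = tuple(int(v) for v in np.round(joints_2d[a]))
        pt_b = tuple(int(v) for v in np.round(joints_2d[b]))
        cv2.line(mask, pt_a, pt_b, color=1, thickness=thickness)
    return mask


if __name__ == "__main__":
    joints = np.array([[32, 10], [32, 30], [20, 45], [12, 60], [44, 45], [52, 60]],
                      dtype=np.float32)
    print("mask pixels:", int(pseudo_limb_mask(joints, 80, 64).sum()))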
  6. Communication between humans and mobile agents is becoming increasingly important as such agents are widely deployed in our daily lives. Vision-and-Dialogue Navigation is one of the tasks that evaluate an agent's ability to interact with humans for assistance and to navigate based on natural language responses. In this paper, we explore the Navigation from Dialogue History (NDH) task, which is based on the Cooperative Vision-and-Dialogue Navigation (CVDN) dataset, and present a state-of-the-art model built upon vision-language transformers. However, despite achieving competitive performance, we find that the agent in the NDH task is not evaluated appropriately by the primary metric, Goal Progress. By analyzing the performance mismatch between Goal Progress and other metrics (e.g., normalized Dynamic Time Warping) from our state-of-the-art model, we show that NDH's sub-path based task setup (i.e., navigating a partial trajectory based on its corresponding subset of the full dialogue) does not provide the agent with enough supervision signal towards the goal region. Therefore, we propose a new task setup called NDH-Full, which takes the full dialogue and the whole navigation path as one instance. We present a strong baseline model and show initial results on this new task. We further describe several approaches we explore to improve model performance (based on curriculum learning, pre-training, and data augmentation), suggesting potentially useful training methods for this new NDH-Full task. 
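    For reference, normalized Dynamic Time Warping (nDTW), one of the path-fidelity metrics mentioned above, is commonly computed as exp(-DTW(R, Q) / (|R| * d_th)), where d_th is the success threshold distance. The sketch below follows that definition; treat the constants as assumptions and defer to the benchmark's official evaluator.

# Sketch of the nDTW path-fidelity metric; the success threshold is an assumption.
import math


def euclidean(p, q):
    return math.dist(p, q)


def dtw(reference, query):
    """Classic O(|R||Q|) dynamic time warping over point sequences."""
    R, Q = len(reference), len(query)
    cost = [[math.inf] * (Q + 1) for _ in range(R + 1)]
    cost[0][0] = 0.0
    for i in range(1, R + 1):
        for j in range(1, Q + 1):
            d = euclidean(reference[i - 1], query[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[R][Q]


def ndtw(reference, query, success_threshold=3.0):
    return math.exp(-dtw(reference, query) / (len(reference) * success_threshold))


if __name__ == "__main__":
    ref = [(0, 0), (1, 0), (2, 0), (3, 0)]
    pred = [(0, 0), (1, 0.5), (2, 0.5), (3, 0)]
    print(round(ndtw(ref, pred), 3))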
  7.
  8. Breathing biomarkers, such as breathing rate, fractional inspiratory time, and inhalation-exhalation ratio, are vital for monitoring the user's health and well-being. Accurate estimation of such biomarkers requires breathing phase detection, i.e., inhalation and exhalation. However, traditional breathing phase monitoring relies on uncomfortable equipment, e.g., chest bands. Smartphone acoustic sensors have shown promising results for passive breathing monitoring during sleep or guided breathing. However, detecting breathing phases using acoustic data can be challenging for various reasons. One of the major obstacles is the complexity of annotating breathing sounds due to inaudible parts in regular breathing and background noise. This paper assesses the potential of using smartphone acoustic sensors for passive, unguided breathing phase monitoring in a natural environment. We address the annotation challenges by developing a novel variant of the teacher-student training method for transferring knowledge from an inertial sensor to an acoustic sensor, eliminating the need for manual breathing sound annotation by fusing signal processing with deep learning techniques. We train and evaluate our model on breathing data collected from 131 subjects, including healthy individuals and respiratory patients. Experimental results show that our model can detect breathing phases with 77.33% accuracy using acoustic sensors. We further present an example use case of breathing phase detection by first estimating the biomarkers from the detected breathing phases and then using these biomarkers for pulmonary patient detection. Using the detected breathing phases, we can estimate fractional inspiratory time with 92.08% accuracy, the inhalation-exhalation ratio with 86.76% accuracy, and the breathing rate with 91.74% accuracy. Moreover, we can distinguish respiratory patients from healthy individuals with up to 76% accuracy. This paper is the first to show the feasibility of detecting regular breathing phases towards passively monitoring respiratory health and well-being using acoustic data captured by a smartphone. 
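    To make the biomarker definitions concrete, the sketch below derives breathing rate, fractional inspiratory time (Ti/Ttot), and the inhalation-exhalation ratio from a list of detected phase segments. The segment representation (phase label plus start and end times in seconds) is an illustrative assumption, not the paper's data format.

# Biomarkers from detected breathing phases; segment format is an assumption.
from dataclasses import dataclass
from typing import List


@dataclass
class PhaseSegment:
    phase: str    # "inhale" or "exhale"
    start: float  # seconds
    end: float    # seconds

    @property
    def duration(self) -> float:
        return self.end - self.start


def biomarkers(segments: List[PhaseSegment]) -> dict:
    t_in = sum(s.duration for s in segments if s.phase == "inhale")
    t_ex = sum(s.duration for s in segments if s.phase == "exhale")
    total = segments[-1].end - segments[0].start
    n_breaths = sum(1 for s in segments if s.phase == "inhale")
    return {
        "breathing_rate_bpm": 60.0 * n_breaths / total,        # breaths per minute
        "fractional_inspiratory_time": t_in / (t_in + t_ex),    # Ti / Ttot
        "inhalation_exhalation_ratio": t_in / t_ex,             # I:E
    }


if __name__ == "__main__":
    segs = [PhaseSegment("inhale", 0.0, 1.5), PhaseSegment("exhale", 1.5, 4.0),
            PhaseSegment("inhale", 4.0, 5.4), PhaseSegment("exhale", 5.4, 8.0)]
    print(biomarkers(segs))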
  9. In this paper, we demonstrate the Information Interactions in Virtual Reality (IIVR) system designed and implemented to study how users interact with abstract information objects in immersive virtual environments in the context of information retrieval. Virtual reality displays are quickly growing as social and personal computing media, and understanding user interactions in these immersive environments is imperative. As a step towards effective information retrieval in such emerging platforms, our system is central to upcoming studies to observe how users engage in information triaging tasks in Virtual Reality (VR). In these studies, we will observe the effects of (1) information layouts and (2) types of interactions in VR. We believe this early system motivates researchers in understanding and designing meaningful interactions for future VR information retrieval applications. 