Title: Spatial AMR: Expanded Spatial Annotation in the Context of a Grounded Minecraft Corpus
This paper presents an expansion to the Abstract Meaning Representation (AMR) annotation schema that captures fine-grained semantically and pragmatically derived spatial information in grounded corpora. We describe a new lexical category conceptualization and set of spatial annotation tools built in the context of a multimodal corpus consisting of 185 3D structure-building dialogues between a human architect and human builder in Minecraft. Minecraft provides a particularly beneficial spatial relation-elicitation environment because it automatically tracks locations and orientations of objects and avatars in the space according to an absolute Cartesian coordinate system. Through a two-step process of sentence-level and document-level annotation designed to capture implicit information, we leverage these coordinates and bearings in the AMRs in combination with spatial framework annotation to ground the spatial language in the dialogues to absolute space.
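The core idea of grounding relative spatial language ("to your left", "behind you") to absolute space is that, given an avatar's absolute position and bearing, a relative frame can be computed deterministically. The sketch below illustrates this with a hypothetical coordinate convention (yaw 0 = facing +z, increasing clockwise); it is not the paper's actual annotation scheme, only an example of the kind of mapping the coordinates and bearings make possible.

```python
import math

def relative_direction(speaker_xz, speaker_yaw_deg, target_xz):
    """Map an absolute target position into the speaker's relative frame.

    Illustrative only: the coordinate and yaw conventions here are
    assumptions, not the corpus's actual scheme. yaw 0 = facing +z,
    increasing clockwise when viewed from above.
    """
    dx = target_xz[0] - speaker_xz[0]
    dz = target_xz[1] - speaker_xz[1]
    # Angle of the target relative to the +z axis, clockwise.
    target_angle = math.degrees(math.atan2(dx, dz))
    # Normalize the offset from the speaker's facing direction to (-180, 180].
    rel = (target_angle - speaker_yaw_deg + 180) % 360 - 180
    if -45 <= rel <= 45:
        return "front"
    if 45 < rel <= 135:
        return "right"
    if -135 <= rel < -45:
        return "left"
    return "behind"
```

With a speaker at the origin facing +z, a target at (5, 0) resolves to "right" under this convention; changing the speaker's yaw changes the label even though the absolute coordinates are unchanged, which is exactly the frame-of-reference ambiguity the annotation has to resolve.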
Award ID(s):
1764048
NSF-PAR ID:
10179909
Journal Name:
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)
Page Range or eLocation-ID:
4883–4892
Sponsoring Org:
National Science Foundation
More Like this
  1. Our goal is to develop and deploy a virtual assistant health coach that can help patients set realistic physical activity goals and live a more active lifestyle. Since there is no publicly shared dataset of health coaching dialogues, the first phase of our research focused on data collection. We hired a certified health coach and 28 patients to collect the first round of human-human health coaching interaction, which took place via text messages and resulted in 2853 messages. The data collection phase was followed by conversation analysis to gain insight into the way information exchange takes place between a health coach and a patient. This was formalized using two annotation schemas: one that focuses on the goals the patient is setting and another that models the higher-level structure of the interactions. In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients. Given the resource-intensive nature of data annotation, successfully annotating a new dataset automatically is key to answering the need for high-quality, large datasets.
  2. Event extraction has long been treated as a sentence-level task in the IE community. We argue that this setting does not match human information-seeking behavior and leads to incomplete and uninformative extraction results. We propose a document-level neural event argument extraction model by formulating the task as conditional generation following event templates. We also compile a new document-level event extraction benchmark dataset, WIKIEVENTS, which includes complete event and coreference annotation. On the task of argument extraction, we achieve an absolute gain of 7.6% F1 and 5.7% F1 over the next best model on the RAMS and WIKIEVENTS datasets, respectively. On the more challenging task of informative argument extraction, which requires implicit coreference reasoning, we achieve a 9.3% F1 gain over the best baseline. To demonstrate the portability of our model, we also create the first end-to-end zero-shot event extraction framework and achieve 97% of the fully supervised model’s trigger extraction performance and 82% of the argument extraction performance given only access to 10 out of the 33 types on ACE.
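Formulating argument extraction as conditional generation means the model emits a filled-in copy of an event template, and the argument spans are then read back out of the generated string. The sketch below shows that read-back step for a hypothetical template; the role names, placeholder syntax, and regex-based parsing are illustrative assumptions, not the paper's actual implementation.

```python
import re

TEMPLATE = "<arg1> attacked <arg2> using <arg3> at <arg4>"

def parse_filled_template(template, generated):
    """Recover argument spans from a generated, filled-in template.

    Simplified sketch of template-based argument extraction: each
    <argN> placeholder becomes a named capture group, and the
    generated sentence is matched against the resulting pattern.
    """
    pattern = re.sub(r"<arg(\d+)>", r"(?P<arg\1>.+?)", re.escape(template))
    m = re.fullmatch(pattern, generated)
    return m.groupdict() if m else {}
```

For example, matching the generated string "the militia attacked the convoy using rockets at the checkpoint" against the template above recovers a role-to-span mapping for all four arguments.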
  3. The movement of animals is strongly influenced by external factors in their surrounding environment such as weather, habitat types, and human land use. With advances in positioning and sensor technologies, it is now possible to capture animal locations at high spatial and temporal granularities. Likewise, modern space-based remote sensing technology provides us with increasing access to large volumes of environmental data, some of which changes on an hourly basis. Environmental data are heterogeneous in source and format, and are usually obtained at different scales and granularities than movement data. Indeed, there remain scientific and technical challenges in developing linkages between the growing collections of animal movement data and the large repositories of heterogeneous remote sensing observations, as well as in the development of new statistical and computational methods for the analysis of movement in its environmental context. These challenges include retrieval, indexing, efficient storage, data integration, and analytic techniques. We have developed a new system, the Environmental-Data Automated Track Annotation (Env-DATA) system, that automates annotation of movement trajectories with remote-sensing environmental information, including high-resolution topography, weather from global and regional reanalysis datasets, climatology, human geography, ocean currents and productivity, land use, vegetation and land surface variables, precipitation, fire, and other global datasets. The system automates the acquisition of data from open web resources of remote sensing and weather data and provides several interpolation methods from the native grid resolution and structure to a global regular grid linked with the movement tracks in space and time.
Env-DATA provides an easy-to-use platform for end users that eliminates technical difficulties of the annotation process, including data acquisition, data transformation and integration, resampling, interpolation, and interpretation. The new Env-DATA system enhances Movebank (www.movebank.org), an open portal of animal tracking data. The aim is to facilitate new understanding and predictive capabilities of spatiotemporal patterns of animal movement in response to dynamic and changing environments from local to global scales. The system is already in use by scientists worldwide, and by several conservation managers, such as the consortium of federal and private institutions that manages the endangered California condor populations.
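The core annotation operation described above is interpolating a gridded environmental variable onto track fixes in both space and time. The sketch below shows one minimal version, assuming a regular lat/lon grid, nearest-neighbor lookup in space, and linear interpolation in time; the function and variable names are illustrative, not part of the actual Env-DATA system, which offers several interpolation methods.

```python
import numpy as np

def annotate_track(track, grid_lats, grid_lons, grid_times, grid_values):
    """Annotate (time, lat, lon) track fixes with a gridded variable.

    Sketch only: nearest neighbor in space, linear in time, on a
    regular grid of shape (time, lat, lon). A production system would
    also handle projections, missing data, and irregular grids.
    """
    out = []
    for t, lat, lon in track:
        i = int(np.abs(grid_lats - lat).argmin())   # nearest grid row
        j = int(np.abs(grid_lons - lon).argmin())   # nearest grid column
        # Linearly interpolate the time series at this grid cell.
        series = grid_values[:, i, j]
        out.append(float(np.interp(t, grid_times, series)))
    return out
```

For a fix halfway between two hourly weather snapshots, this returns the average of the two bracketing grid-cell values, which is the behavior one would expect from linear temporal interpolation.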
  4. Abstract: The quantitative estimation of precipitation from orbiting passive microwave imagers has been performed for more than 30 years. The development of retrieval methods consists of establishing physical or statistical relationships between the brightness temperatures (TBs) measured at frequencies between 5 and 200 GHz and precipitation. Until now, these relationships have essentially been established at the “pixel” level, associating the average precipitation rate inside a predefined area (the pixel) with the collocated multispectral radiometric measurement. This approach considers each pixel as an independent realization of a process and ignores the fact that precipitation is a dynamic variable with rich multiscale spatial and temporal organization. Here we propose to look beyond the pixel values of the TBs and show that useful information for precipitation retrieval can be derived from the variations of the observed TBs in a spatial neighborhood around the pixel of interest. We also show that considering neighboring information allows us to better handle the complex observation geometry of conical-scanning microwave imagers, involving frequency-dependent beamwidths, overlapping fields of view, and large Earth incidence angles. Using spatial convolution filters, we compute “nonlocal” radiometric parameters sensitive to spatial patterns and scale-dependent structures of the TB fields, which are the “geometric signatures” of specific precipitation structures such as convective cells. We demonstrate that using nonlocal radiometric parameters to enrich the spectral information associated with each pixel allows for reduced retrieval uncertainty (a reduction of 6%–11% in the mean absolute retrieval error) in a simple k-nearest neighbors retrieval scheme.
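A "nonlocal" radiometric parameter is any statistic of the TB field computed over a spatial neighborhood rather than a single pixel. The toy example below computes a k x k local standard deviation, one simple measure of the spatial variability that distinguishes, say, a convective cell from stratiform rain; the filter choice and edge handling are illustrative assumptions, not the paper's actual filter bank.

```python
import numpy as np

def local_std(tb, k=3):
    """Local standard deviation of a brightness-temperature field.

    Toy "nonlocal" parameter: a k x k moving window measuring spatial
    variability around each pixel. Border pixels are dropped here; a
    real scheme would handle edges and use frequency-dependent
    footprints.
    """
    h, w = tb.shape
    r = k // 2
    out = np.empty((h - 2 * r, w - 2 * r))
    for i in range(r, h - r):
        for j in range(r, w - r):
            out[i - r, j - r] = tb[i - r:i + r + 1, j - r:j + r + 1].std()
    return out
```

Stacking such maps alongside the per-pixel TBs gives each pixel an enriched feature vector, which can then be fed to a k-nearest-neighbors retrieval exactly as the plain spectral features would be.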
  5. Spatial reasoning is an important skillset that is malleable to training interventions. One possible context for intervention is the popular video game Minecraft, which encourages users to engage in spatial manipulation of 3D objects. However, few papers have chronicled any in-game practices that might evidence spatial reasoning, or how we might study its development through the game. In this paper, we report on 11 middle school students’ spatial reasoning practices while playing Minecraft. We use audio and video data of student gameplay to delineate five in-game practices that align with spatial reasoning, and we expand on a student case study to explicate these practices. The identified practices may be beneficial for studying spatial reasoning development in game-based environments and contribute to a growing body of research on the ways games support the development of important and transferable skills.