Title: People, Projects, Organizations, and Products: Designing a Knowledge Graph to Support Multi-Stakeholder Environmental Planning and Design
As the need for broader-scale solutions to environmental problems is increasingly recognized, traditional hierarchical, government-led models of coordination are being supplemented by or transformed into more collaborative inter-organizational networks (i.e., collaboratives, coalitions, partnerships). As diffuse networks, such regional environmental planning and design (REPD) efforts often face challenges in sharing and using spatial and other types of information. Recent advances in semantic knowledge management technologies, such as knowledge graphs, have the potential to address these challenges. In this paper, we first describe the information needs of three multi-stakeholder REPD initiatives in the western USA using a list of 80 need-to-know questions and concerns. The top needs expressed were for help in tracking the participants, institutions, and information products relevant to the REPD's focus. To address these needs, we developed a prototype knowledge graph based on the RDF and GeoSPARQL standards. This semantic approach provided a more flexible data structure than traditional relational databases, as well as the functionality to query information across different providers; however, the lack of semantic data expertise, the complexity of existing software solutions, and limited online hosting options are significant barriers to adoption. These barriers are more acute for geospatial data, which also faces the added challenge of maintaining and synchronizing both semantic and traditional geospatial datastores.
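To make the data model concrete, the following is a minimal sketch, assuming a Python environment with rdflib, of how people, projects, and organizations might be interlinked as RDF triples and queried with SPARQL. All entity and property names (e.g., ex:leads, ex:participatesIn) are illustrative placeholders rather than the prototype's actual schema, and full GeoSPARQL spatial functions would typically require a dedicated triple store such as Apache Jena with its GeoSPARQL extension.

```python
# Minimal sketch: people, projects, and organizations as RDF triples (rdflib).
# The ex: vocabulary below is hypothetical, not the paper's published schema.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/repd/")
GEO = Namespace("http://www.opengis.net/ont/geosparql#")

g = Graph()
g.bind("ex", EX)
g.bind("geo", GEO)

# Link entities without committing to a fixed relational schema.
g.add((EX.RiverRestoration, RDF.type, EX.Project))
g.add((EX.JaneDoe, RDF.type, EX.Person))
g.add((EX.JaneDoe, EX.leads, EX.RiverRestoration))
g.add((EX.BasinCoalition, EX.participatesIn, EX.RiverRestoration))

# Geometry stored as a GeoSPARQL WKT literal; spatial functions such as
# geof:sfWithin require a GeoSPARQL-aware triple store, not plain rdflib.
g.add((EX.RiverRestoration, GEO.asWKT,
       Literal("POINT(-116.2 43.6)", datatype=GEO.wktLiteral)))

# A plain SPARQL query: who leads which project?
for person, project in g.query("""
    PREFIX ex: <http://example.org/repd/>
    SELECT ?person ?project WHERE { ?person ex:leads ?project . }
"""):
    print(person, "leads", project)
```

Adding a new relationship (say, an ex:funds link between an organization and a project) requires only one more triple, which is the schema flexibility the abstract contrasts with relational databases.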
Award ID(s): 1737573
NSF-PAR ID: 10312077
Author(s) / Creator(s):
Date Published:
Journal Name: ISPRS International Journal of Geo-Information
Volume: 10
Issue: 12
ISSN: 2220-9964
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Initiated by the University Consortium of Geographic Information Science (UCGIS), the GIS&T Body of Knowledge (BoK) is a community-driven endeavor to define, develop, and document geospatial topics related to geographic information science and technologies (GIS&T). In recent years, the GIS&T BoK has undergone rigorous development in terms of topic reorganization and content updating, resulting in a new digital version of the project. While the BoK topics provide useful materials for researchers and students to learn about GIS, the semantic relationships among topics, such as semantic similarity, should also be identified so that better, automated topic navigation can be achieved. Currently, related topics are defined manually by editors or authors, which may result in an incomplete assessment of topic relationships. To address this challenge, our research evaluates the effectiveness of multiple natural language processing (NLP) techniques in extracting semantics from text, including both deep neural networks and traditional machine learning approaches. In addition, a novel text summarizer, KACERS (Keyword-Aware Cross-Encoder-Ranking Summarizer), is proposed to generate semantic summaries of scientific publications. By identifying the semantic linkages among key topics, this work guides the future development and content organization of the GIS&T BoK project. It also offers a new perspective on the use of machine learning techniques for analyzing scientific publications and demonstrates the potential of the KACERS summarizer for semantic understanding of long text documents.
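As a concrete illustration of the "traditional machine learning" end of the spectrum this abstract mentions, the sketch below scores semantic similarity between BoK topic descriptions with TF-IDF vectors and cosine similarity. The topic texts are invented placeholders, and KACERS itself (a keyword-aware cross-encoder summarizer) is not reproduced here.

```python
# Sketch: pairwise semantic similarity between (placeholder) BoK topic texts
# using a TF-IDF baseline; deep models such as cross-encoders would replace this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topics = {
    "Spatial Queries": "Retrieving features with spatial predicates such as intersects and within.",
    "Spatial Indexing": "Data structures such as R-trees that accelerate spatial queries.",
    "Map Projections": "Transforming coordinates between geographic and projected systems.",
}

names = list(topics)
vectors = TfidfVectorizer(stop_words="english").fit_transform(topics.values())
sim = cosine_similarity(vectors)  # symmetric topic-by-topic similarity matrix

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} <-> {names[j]}: {sim[i, j]:.2f}")
```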

     
  2. An abundance of biomedical data is generated in the form of clinical notes, reports, and research articles available online. This data holds valuable information that must be extracted, retrieved, and transformed into actionable knowledge. However, access to this information is challenging because search engines depend on precise, machine-interpretable semantic metadata. Despite search engines' efforts to interpret semantic information, they still struggle to index, search, and retrieve relevant information accurately. To address these challenges, we propose a novel graph-based semantic knowledge-sharing approach that enhances the quality of biomedical semantic annotation by engaging biomedical domain experts. In this approach, entities in the knowledge-sharing environment are interlinked and play critical roles. Authorial queries can be posted on the "Knowledge Cafe," and community experts can provide recommendations for semantic annotations. The community can further validate and evaluate the expert responses through a voting scheme, transforming the "Knowledge Cafe" environment into a knowledge graph with semantically linked entities. We evaluated the proposed approach through a series of scenarios, measuring precision, recall, F1-score, and accuracy. Our results showed an acceptable level of accuracy at approximately 90%. The source code for "Semantically" is freely available at: https://github.com/bukharilab/Semantically
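The workflow this abstract describes can be pictured with a toy model, given below; the class and method names are hypothetical and are not taken from the Semantically codebase. An authorial query collects expert annotation recommendations, and community votes select the annotation that is promoted into the knowledge graph.

```python
# Toy model of the "Knowledge Cafe" workflow: query -> expert recommendations
# -> community voting -> accepted annotation. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    expert: str
    annotation: str  # e.g., an ontology term URI for the queried entity
    votes: int = 0

@dataclass
class KnowledgeCafeQuery:
    author: str
    entity: str
    recommendations: list = field(default_factory=list)

    def recommend(self, expert: str, annotation: str) -> None:
        self.recommendations.append(Recommendation(expert, annotation))

    def vote(self, index: int) -> None:
        self.recommendations[index].votes += 1

    def accepted(self) -> Recommendation:
        # The community-validated annotation is the highest-voted one.
        return max(self.recommendations, key=lambda r: r.votes)

query = KnowledgeCafeQuery(author="author_1", entity="BRCA1")
query.recommend("expert_a", "http://purl.obolibrary.org/obo/SO_0000704")
query.recommend("expert_b", "free-text: protein")
query.vote(0); query.vote(0); query.vote(1)
print(query.accepted().annotation)  # the annotation linked into the graph
```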
  3. Access to geospatial knowledge in higher education requires broad inclusion of spatial concepts in courses across multiple disciplines. Geospatial competency is required to meet the needs of a rapidly globalizing world and is a vital component of modern science education. Geospatial education provides students with proficiency in interpreting quantitative and qualitative information and exposes them to technical concepts such as spatial analytics and data management. Despite these numerous benefits, incorporating geospatial concepts and hands-on geographic information systems (GIS) experiences into course curricula can be a challenge for educators.
  4. Spatial resolution is critical for observing and monitoring environmental phenomena. Acquiring high-resolution bathymetry data directly from satellites is not always feasible due to equipment limitations, so spatial data scientists and researchers turn to single-image super-resolution (SISR) methods that use deep learning to increase pixel density. While super-resolution residual networks (e.g., SR-ResNet) are promising for this purpose, several challenges remain: (1) Earth data such as bathymetry is expensive to obtain and relatively limited in volume; (2) model training must comply with certain domain knowledge; (3) certain areas of interest require more accurate measurements than others. To address these challenges, following the transfer-learning principle, we study how to leverage an existing pre-trained super-resolution deep learning model, namely SR-ResNet, for high-resolution bathymetry data generation. We enhance the SR-ResNet model with loss functions that encode domain knowledge and, to make the model perform better in certain spatial areas, add further loss terms that increase the penalty on areas of interest, as sketched below. Our experiments show that our approaches achieve higher accuracy than most baseline models when evaluated with metrics including MSE, PSNR, and SSIM.
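The region-weighted penalty described in this abstract can be illustrated as a loss term. The snippet below is a minimal PyTorch sketch with hypothetical function and parameter names (roi_weighted_mse, roi_weight), not the paper's actual code, showing how pixels inside an area-of-interest mask can contribute more heavily to the training loss.

```python
# Sketch: an MSE loss that penalizes errors inside an area-of-interest mask
# more heavily, as one extra term alongside a pre-trained SR-ResNet's loss.
import torch

def roi_weighted_mse(pred, target, roi_mask, roi_weight=5.0):
    """MSE whose per-pixel weight is 1 outside the ROI and roi_weight inside."""
    weights = 1.0 + (roi_weight - 1.0) * roi_mask
    return (weights * (pred - target) ** 2).mean()

# Example: a 1x1x8x8 bathymetry tile with a 4x4 area of interest.
pred = torch.rand(1, 1, 8, 8, requires_grad=True)
target = torch.rand(1, 1, 8, 8)
mask = torch.zeros(1, 1, 8, 8)
mask[..., 2:6, 2:6] = 1.0

loss = roi_weighted_mse(pred, target, mask)
loss.backward()  # differentiable, so it can be added to the existing loss
print(loss.item())
```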