

Title: Evaluating the Impact of Data Representation on EHR-Based Analytic Tasks
Different analytic techniques operate optimally on different types of data. As EHR-based analytics expands to new tasks, data will have to be transformed into different representations so that those tasks can be solved well. We classified representations into broad categories based on their characteristics and proposed Severity Encoding Variables (SEVs), a new knowledge-driven representation for clinical data mining and trajectory mining. We also studied which characteristics make representations most suitable for particular clinical analytics tasks, including trajectory mining. Our evaluation shows that, for regression, most data representations performed similarly, with SEV achieving a slight (albeit statistically significant) advantage. For patients at high risk of diabetes, it outperformed the competing representation by 20% (relative). For association mining, SEV achieved the highest performance; its ability to constrain the pattern search space through clinical knowledge was key to this success.
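As a rough illustration of what a knowledge-driven severity encoding can look like, the sketch below maps raw clinical measurements to ordinal severity levels using clinician-style cut points. The variable names, thresholds, and severity scale are hypothetical examples, not the SEV definitions used in the paper.

```python
# Illustrative sketch only: the thresholds, variable names, and severity scale
# below are hypothetical examples of a knowledge-driven encoding, not the
# paper's actual SEV definitions.

def encode_severity(variable: str, value: float) -> int:
    """Map a raw clinical measurement to an ordinal severity level (0 = normal)."""
    # Hypothetical clinician-derived cut points per variable.
    cut_points = {
        "hba1c":   [5.7, 6.5, 8.0],   # %: normal / prediabetes / diabetes / poorly controlled
        "glucose": [100, 126, 200],   # mg/dL, fasting
    }
    thresholds = cut_points[variable]
    # Severity = number of cut points the measurement meets or exceeds.
    return sum(value >= t for t in thresholds)

# Example: an HbA1c of 7.1% falls in the third band -> severity level 2.
print(encode_severity("hba1c", 7.1))   # 2
print(encode_severity("glucose", 95))  # 0
```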
Award ID(s): 1602394
NSF-PAR ID: 10168833
Date Published:
Journal Name: Medinfo 2019
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Unsupervised text encoding models have recently fueled substantial progress in NLP. The key idea is to use neural networks to convert words in texts to vector-space representations based on word positions in a sentence and their contexts, which are suitable for end-to-end training of downstream tasks. We see a strikingly similar situation in spatial analysis, which focuses on incorporating both the absolute positions and the spatial contexts of geographic objects such as POIs into models. A general-purpose representation model for space would be valuable for a multitude of tasks. However, no such general model exists to date beyond simply applying discretization or feed-forward nets to coordinates, and little effort has been put into jointly modeling distributions with vastly different characteristics, which commonly emerge in GIS data. Meanwhile, Nobel Prize-winning neuroscience research shows that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding and is critical for recognizing places and for path integration. We therefore propose a representation learning model called Space2Vec to encode the absolute positions and spatial relationships of places. We conduct experiments on two real-world geographic datasets for two different tasks: 1) predicting types of POIs given their positions and context, and 2) image classification leveraging geo-locations. Results show that, because of its multi-scale representations, Space2Vec outperforms well-established ML approaches such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches for location modeling and image classification. Detailed analysis shows that each baseline handles distributions well at only one scale and performs poorly at other scales, whereas Space2Vec's multi-scale representation handles distributions at different scales.
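The multi-scale periodic encoding idea can be sketched as follows: each 2D coordinate is projected onto a few directions and passed through sine/cosine functions at geometrically spaced wavelengths, loosely mimicking grid-cell codes. The direction set, number of scales, and wavelength range below are illustrative assumptions, not the published Space2Vec configuration (which also learns layers on top of such features).

```python
import numpy as np

# Minimal sketch of a multi-scale periodic location encoder in the spirit of
# Space2Vec. All parameter choices here are illustrative assumptions.

def encode_location(xy, num_scales=16, min_lambda=1.0, max_lambda=10_000.0):
    xy = np.asarray(xy, dtype=float)                       # shape (2,)
    # Three unit directions at 120 degrees, loosely mimicking grid-cell geometry.
    angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3, 2)
    # Geometric progression of wavelengths (multi-scale).
    lambdas = min_lambda * (max_lambda / min_lambda) ** (
        np.arange(num_scales) / max(num_scales - 1, 1))
    feats = []
    for lam in lambdas:
        proj = directions @ xy / lam                        # (3,)
        feats.append(np.sin(2 * np.pi * proj))
        feats.append(np.cos(2 * np.pi * proj))
    return np.concatenate(feats)                            # (num_scales * 6,)

vec = encode_location([4532.0, -1200.5])
print(vec.shape)   # (96,)
```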
  2. Network embedding has become the cornerstone of a variety of mining tasks, such as classification, link prediction, clustering, anomaly detection, and many more, thanks to its superior ability to encode the intrinsic network characteristics in a compact low-dimensional space. Most existing methods focus on a single network and/or a single resolution, generating embeddings of different network objects (node/subgraph/network) from different networks separately. A fundamental limitation of such methods is that the intrinsic relationships across different networks (e.g., two networks sharing the same or similar subgraphs) and across different resolutions (e.g., node-subgraph membership) are ignored, resulting in disparate embeddings. Consequently, this leads to sub-optimal performance or even becomes inapplicable for some downstream mining tasks (e.g., role classification, network alignment, etc.). In this paper, we propose a unified framework, MrMine, to learn the representations of objects from multiple networks at three complementary resolutions (network, subgraph, and node) simultaneously. The key idea is to construct a cross-resolution, cross-network context for each object. The proposed method bears two distinctive features. First, it enables and/or boosts various multi-network downstream mining tasks by placing embeddings at different resolutions from different networks in the same embedding space. Second, our method is efficient and scalable, with O(n log n) time complexity for the base algorithm and time complexity linear in the number of nodes and edges of the input networks for the accelerated version. Extensive experiments on real-world data show that our methods (1) enable and enhance a variety of multi-network mining tasks, and (2) scale up to million-node networks.
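A hypothetical sketch of the cross-resolution, cross-network context idea: network, subgraph, and node identifiers share one vocabulary, and each object's context mixes neighbors from adjacent resolutions so that a single skip-gram-style embedder can place all of them in one space. The toy data and pairing rules are assumptions for illustration, not MrMine's actual construction or its accelerated variant.

```python
# Toy multi-network input: each network contains named subgraph patterns and
# the node identifiers belonging to each pattern. Entirely hypothetical data.
networks = {
    "G1": {"subgraphs": {"tri": ["a", "b", "c"], "star": ["c", "d", "e"]}},
    "G2": {"subgraphs": {"tri": ["x", "y", "z"]}},
}

def cross_resolution_sentences(networks):
    """Build token lists mixing network, subgraph, and node identifiers."""
    sentences = []
    for net_id, net in networks.items():
        for sg_id, members in net["subgraphs"].items():
            # The shared subgraph token ("tri") links G1 and G2 whenever both
            # contain the same (or a similar) pattern.
            sentences.append([net_id, sg_id] + [f"{net_id}:{v}" for v in members])
    return sentences

for s in cross_resolution_sentences(networks):
    print(s)
# Any skip-gram-style trainer can consume these token lists to co-embed
# networks, subgraphs, and nodes in a single embedding space.
```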
  3. Treating disease according to precision health requires the individualization of therapeutic solutions as a cardinal step in a process that typically depends on multiple factors. The starting point is the collection and assembly of data over time to assess the patient's health status and monitor response to therapy. Radiomics is a very important component of this process. Its main goal is implementing a protocol to quantify the informative content of images by first mining and then extracting the most representative features. Further analysis aims to detect potential disease phenotypes through signs and marks of heterogeneity. As multimodal images hinge on various data sources, and these can be integrated with treatment plans and follow-up information, radiomics is naturally centered on dynamically monitoring disease progression and/or the health trajectory of patients. However, radiomics also creates critical needs. A concise list includes: (a) successful harmonization of intra-/inter-modality radiomic measurements to facilitate association with other data domains (genetic, clinical, lifestyle aspects, etc.); (b) the ability of data science to revise model strategies and analytic tools to tackle multiple data types and structures (electronic medical records, personal histories, hospitalization data, genomic data from various specimens, imaging, etc.) and to offer data-agnostic solutions for predicting patient outcomes; and (c) model validation with independent datasets to ensure generalization of results, clinical value of new risk stratifications, and support for clinical decisions in highly individualized patient management.
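As a minimal, purely illustrative example of the quantification step in such a protocol, the sketch below computes a few first-order features over an image region; real radiomics pipelines (and the harmonization issues noted above) involve far richer feature sets and standardized extraction software.

```python
import numpy as np

# Illustrative sketch of first-order radiomic features over a region of
# interest. The feature set and the random "scan" are placeholders.

def first_order_features(region: np.ndarray) -> dict:
    values = region.ravel().astype(float)
    hist, _ = np.histogram(values, bins=32)
    p = hist[hist > 0] / hist.sum()            # empirical intensity distribution
    mean, std = values.mean(), values.std()
    return {
        "mean": mean,
        "variance": values.var(),
        "skewness": ((values - mean) ** 3).mean() / (std ** 3 + 1e-9),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(0)
lesion_roi = rng.normal(loc=120, scale=15, size=(32, 32))  # fake ROI intensities
print(first_order_features(lesion_roi))
```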
  4. Network embedding has demonstrated effective empirical performance for various network mining tasks such as node classification, link prediction, clustering, and anomaly detection. However, most of these algorithms focus on the single-view network scenario. From a real-world perspective, one individual node can have different connectivity patterns in different networks. For example, one user can have different relationships on Twitter, Facebook, and LinkedIn due to varying user behaviors on different platforms. In this case, jointly considering the structural information from multiple platforms (i.e., multiple views) can potentially lead to more comprehensive node representations and eliminate the noise and bias introduced by a single view. In this paper, we propose VANE, a view-adversarial framework that generates comprehensive and robust multi-view network representations based on two adversarial games. The first adversarial game enhances the comprehensiveness of the node representation by discriminating the view information obtained from the subgraph induced by that node's neighbors. The second adversarial game improves the robustness of the node representation by challenging it with fake node representations from a generative adversarial network. We conduct extensive experiments on downstream tasks with real-world multi-view networks, which show that our proposed VANE framework significantly outperforms other baseline methods.
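A minimal sketch of the first adversarial game described above, under assumed dimensions and toy inputs: per-view encoders produce node embeddings, a discriminator tries to identify the source view of each embedding, and the encoders are also trained to fool it so the representations become view-invariant. This is the generic adversarial pattern the abstract describes, not the published VANE architecture.

```python
import torch
import torch.nn as nn

# Assumed dimensions and toy features; not VANE's actual configuration.
num_views, in_dim, emb_dim, num_nodes = 3, 16, 8, 100
encoders = nn.ModuleList([nn.Linear(in_dim, emb_dim) for _ in range(num_views)])
discriminator = nn.Sequential(nn.Linear(emb_dim, 32), nn.ReLU(), nn.Linear(32, num_views))

opt_enc = torch.optim.Adam(encoders.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Toy per-view node features standing in for neighborhood-induced subgraph inputs.
views = [torch.randn(num_nodes, in_dim) for _ in range(num_views)]
view_labels = torch.cat(
    [torch.full((num_nodes,), v, dtype=torch.long) for v in range(num_views)])

for step in range(100):
    embs = torch.cat([enc(x) for enc, x in zip(encoders, views)])

    # Discriminator step: predict which view each embedding came from.
    opt_disc.zero_grad()
    ce(discriminator(embs.detach()), view_labels).backward()
    opt_disc.step()

    # Encoder step: maximize the discriminator's loss, i.e. fool it, so the
    # node embeddings carry comprehensive, view-invariant information.
    opt_enc.zero_grad()
    (-ce(discriminator(embs), view_labels)).backward()
    opt_enc.step()
```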
  5. Learning from multiple representations (MRs) is not an easy task for most people, despite how easy it is for experts. Different combinations of representations (e.g., text + photograph, graph + formula, map + diagram) pose different challenges for learners, but across the literature researchers find these to be challenging learning tasks. Each representation typically includes some unique information as well as some information shared with the other representation(s). Finding one piece of information is only somewhat challenging, but linking information across representations, and especially making inferences, are very challenging and important parts of using multiple representations for learning. Skills for coordinating multiple representations are rarely taught in classrooms, despite the fact that learners are frequently tested on them. Learning from MRs depends on the specific learning tasks posed, learner characteristics, the specific representation(s) used, and the design of each representation. These factors act separately and in combination (which can be compensatory, additive, or interactive). Learning tasks can be differentially effective depending on learner characteristics, especially prior knowledge, self-regulation, and age/grade. Learning tasks should be designed with this differential effectiveness in mind, and researchers should test for such interactions.