Title: End-to-End Evidential-Efficient Net for Radiomics Analysis of Brain MRI to Predict Oncogene Expression and Overall Survival
We present a novel radiomics approach using multimodality MRI to predict the expression of an oncogene (O6-Methylguanine-DNA methyltransferase, MGMT) and the overall survival (OS) of glioblastoma (GBM) patients. Specifically, we employed EffNetV2-T, a downscaled and modified variant of EfficientNetV2, as the feature extractor. In addition, we used evidential layers to control the distribution of prediction outputs; these layers classify the high-dimensional radiomics features to predict the methylation status of MGMT and OS. In tests, our model achieved an accuracy of 0.844, making it a candidate clinic-enabling technique for the diagnosis and management of GBM. Comparisons indicated that our method outperformed existing work.
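The evidential layers described here follow the general recipe of evidential deep learning for classification: the network outputs non-negative "evidence" that parameterizes a Dirichlet distribution over class probabilities, so each prediction carries its own uncertainty. A minimal PyTorch sketch of such a head, assuming a Dirichlet-based formulation; the layer sizes and names are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Maps backbone features to Dirichlet evidence over the classes."""
    def __init__(self, in_features: int, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x):
        evidence = F.softplus(self.fc(x))     # non-negative evidence per class
        alpha = evidence + 1.0                # Dirichlet concentration parameters
        strength = alpha.sum(dim=-1, keepdim=True)
        prob = alpha / strength               # expected class probabilities
        vacuity = alpha.shape[-1] / strength  # high when total evidence is low
        return prob, vacuity

# Toy usage: binary MGMT-methylation call from a hypothetical 1280-d feature vector
head = EvidentialHead(in_features=1280, num_classes=2)
prob, vacuity = head(torch.randn(4, 1280))    # stand-in for EffNetV2-T features
```

The vacuity term grows as total evidence shrinks, which is what would let such a model flag low-confidence predictions rather than forcing an overconfident call.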
Award ID(s):
2115095
NSF-PAR ID:
10496627
Editor(s):
Wang, Linwei; Dou, Qi; Fletcher, P. Thomas; Speidel, Stefanie; Li, Shuo
Publisher / Repository:
Springer Nature Switzerland
Journal Name:
Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background: Glioblastoma multiforme (GBM) is a fast-growing, highly aggressive brain tumor that invades nearby brain tissue and presents secondary nodular lesions across the whole brain but generally does not spread to distant organs. Without treatment, GBM can result in death in about six months. The clinical challenges depend on multiple factors: brain localization, resistance to conventional therapy, disrupted tumor blood supply inhibiting effective drug delivery, complications from peritumoral edema, intracranial hypertension, seizures, and neurotoxicity. Main text: Imaging techniques are routinely used to detect and localize brain tumor lesions. In particular, magnetic resonance imaging (MRI) delivers multimodal images both before and after contrast administration, displaying enhancement and describing physiological features such as hemodynamic processes. This review considers one possible extension of radiomics in GBM studies: recalibrating the analysis of targeted segmentations to the whole-organ scale. After identifying critical areas of research, the focus is on illustrating the potential utility of an integrated approach whose main components are multimodal imaging, radiomic data processing, and brain atlases. The templates produced by such analyses are promising inference tools that can provide spatio-temporal information on GBM evolution and generalize to other cancers. Conclusions: Novel inference strategies for complex cancer systems, built on radiomic models from multimodal imaging data, can be well supported by machine learning and other computational tools that translate suitably processed information into more accurate patient stratifications and evaluations of treatment efficacy.
  2. Non-small-cell lung cancer (NSCLC) represents approximately 80–85% of lung cancer diagnoses and is the leading cause of cancer-related death worldwide. Recent studies indicate that image-based radiomics features from positron emission tomography/computed tomography (PET/CT) images have predictive power for NSCLC outcomes. Easily calculated functional features such as the maximum and mean standardized uptake value (SUV) and total lesion glycolysis (TLG) are most commonly used for NSCLC prognostication, but their prognostic value remains controversial. Meanwhile, convolutional neural networks (CNNs) are rapidly emerging as a new method for cancer image analysis, with significantly enhanced predictive power compared to hand-crafted radiomics features. Here we show that CNNs trained to perform tumor segmentation, with no information other than physician contours, identify a rich set of survival-related image features with remarkable prognostic value. In a retrospective study of pre-treatment PET/CT images of 96 NSCLC patients treated with stereotactic body radiotherapy (SBRT), we found that a CNN segmentation algorithm (U-Net) trained for tumor segmentation in PET and CT images contained features strongly correlated with 2- and 5-year overall and disease-specific survival. The U-Net saw no clinical information (e.g., survival, age, smoking history) other than the images and the corresponding tumor contours provided by physicians. In addition, we observed the same trend when validating the U-Net features against an external data set provided by the Stanford Cancer Institute. Furthermore, through visualization of the U-Net, we found convincing evidence that regions of metastasis and recurrence match the regions where the U-Net features identified patterns predicting higher likelihoods of death. We anticipate our findings will be a starting point for more sophisticated, non-intrusive, patient-specific cancer prognosis. For example, the deep-learned PET/CT features can not only predict survival but also visualize high-risk regions within or adjacent to the primary tumor, and hence potentially impact therapeutic outcomes through optimal selection of therapeutic strategy or first-line therapy adjustment.
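For context, the SUV and TLG features that this abstract contrasts with CNN features are straightforward to compute. A minimal NumPy sketch, assuming an SUV-converted PET volume and a binary tumor mask; the array names and voxel size are illustrative:

```python
import numpy as np

def pet_functional_features(suv: np.ndarray, mask: np.ndarray, voxel_ml: float):
    """SUVmax, SUVmean, and TLG from a PET SUV volume and a binary tumor mask."""
    tumor = suv[mask > 0]
    suv_max = tumor.max()
    suv_mean = tumor.mean()
    mtv_ml = mask.sum() * voxel_ml   # metabolic tumor volume in mL
    tlg = suv_mean * mtv_ml          # total lesion glycolysis = SUVmean * MTV
    return suv_max, suv_mean, tlg

# Toy example: 3-mm isotropic voxels -> 0.027 mL per voxel
suv = np.random.rand(64, 64, 32) * 10
mask = np.zeros_like(suv)
mask[30:40, 30:40, 10:20] = 1
print(pet_functional_features(suv, mask, voxel_ml=0.027))
```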
  3. Cybersecurity continues to be a critical aspect of every computing discipline, especially operating system (OS) development. The OS resides at the lowest layer above the hardware in the computing hierarchy; even if the layers above the OS are well hardened, a security flaw in the OS will compromise the resources in those higher layers. Although several learning resources and courses are available for OS security, they are taught in advanced undergraduate or graduate-level computer security classes. In this work, we develop cybersecurity educational modules that instructors can adopt in their OS courses to emphasize security while teaching OS concepts. The goal is to engage students in learning the security aspects of an OS alongside its concepts, giving them a good understanding of different security mechanisms and how they are implemented in the OS. To this end, we develop security educational modules for an OS course that are available to instructors for adoption. The modules are designed for an undergraduate-level OS course; to work on them, students should be familiar with C programming and the OS concepts taught in class. The modules are intended to be completed within a semester, so we organize them into three mini-projects, each of which can be completed within a few weeks. We chose xv6 as the platform due to its popularity as an educational OS. To develop the modules, we referred to the recent edition of a popular OS textbook for the security concepts; its topics include authentication, authorization, cryptography, and distributed system security. We kept our modules aligned with these topics, except distributed system security, and added a module implementing a defense against buffer-overflow attacks, a well-known class of software vulnerability. We created three mini-projects for these modules, each accompanied by documentation and a GitHub repository; each project has two versions, a student assignment available in the repository and a solution version for instructors. The first project implements a user authentication system in xv6: students implement specifications such as a password structure with encryption and programs such as useradd, passwd, whoami, and login, with implementation guidelines and skeleton code provided in the documentation. The authorization project implements Unix-style access control; students modify and create various structures and functions within the xv6 kernel. The last project builds a defense against buffer overflows using Address Space Layout Randomization (ASLR): students implement a random number generator and modify the executable file loader in xv6 (a sketch of the core idea appears below). The submission for each project is expected to demonstrate behavior comparable to the relevant facilities in a production-grade OS such as Linux.
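The ASLR project hinges on a simple idea: perturb the executable's load address by a random, page-aligned offset at load time. A minimal Python sketch of that logic (xv6 itself is written in C, and names like PAGE_SIZE, the window size, and the xorshift generator here are illustrative assumptions, not the module's actual code):

```python
PAGE_SIZE = 4096          # xv6 uses 4 KiB pages
ASLR_WINDOW_PAGES = 256   # illustrative: randomize within a 1 MiB window

def xorshift32(state: int) -> int:
    """Tiny PRNG of the kind students might implement inside the kernel."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def randomized_load_base(base: int, seed: int) -> int:
    """Shift the executable's load base by a random, page-aligned offset."""
    offset_pages = xorshift32(seed) % ASLR_WINDOW_PAGES
    return base + offset_pages * PAGE_SIZE

print(hex(randomized_load_base(0x1000, seed=0xDEADBEEF)))
```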
  4. It is critical that machine learning (ML) model predictions be trustworthy for high-throughput catalyst discovery. Uncertainty quantification (UQ) methods allow estimation of the trustworthiness of an ML model, but these methods have not been well explored in heterogeneous catalysis. Herein, we investigate different UQ methods applied to a crystal graph convolutional neural network that predicts adsorption energies of molecules on alloys from the Open Catalyst 2020 dataset, the largest existing heterogeneous catalyst dataset. We apply three UQ methods to the adsorption energy predictions, namely k-fold ensembling, Monte Carlo dropout, and evidential regression. The effectiveness of each UQ method is assessed based on accuracy, sharpness, dispersion, calibration, and tightness. Evidential regression is demonstrated to be a powerful approach for rapidly obtaining tunable, competitively trustworthy UQ estimates for heterogeneous catalysis applications when using neural networks. Recalibration of model uncertainties is shown to be essential for practical uncertainty-driven screening of catalysts.
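Evidential regression, the best-performing UQ method here, has the network output the four parameters of a Normal-Inverse-Gamma prior and reads uncertainty directly from them. A minimal PyTorch sketch of that readout, following the general deep-evidential-regression formulation; the head below is an illustrative assumption, not the paper's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressionHead(nn.Module):
    """Predicts Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)."""
    def __init__(self, in_features: int):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.fc(x).unbind(dim=-1)
        nu = F.softplus(log_nu)                  # > 0
        alpha = F.softplus(log_alpha) + 1.0      # > 1, keeps the variance finite
        beta = F.softplus(log_beta)              # > 0
        aleatoric = beta / (alpha - 1.0)         # expected data noise
        epistemic = beta / (nu * (alpha - 1.0))  # model uncertainty
        return gamma, aleatoric, epistemic

head = EvidentialRegressionHead(in_features=128)
mean, alea, epis = head(torch.randn(8, 128))  # e.g. adsorption-energy features
```

A single forward pass yields both the predicted energy and its uncertainty, which is why this approach is so much cheaper than ensembling or repeated dropout sampling.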

     
  5. Poor time predictability of multicore processors has been a long-standing challenge in the real-time systems community. In this paper, we make the case that a fundamental obstacle to efficient and predictable real-time computing on multicore is the lack of a proper memory abstraction for expressing memory criticality, one that cuts across the layers of the system: application, OS, and hardware. We therefore propose a new holistic resource management approach driven by a new memory abstraction, which we call Deterministic Memory. Its key characteristic is that the platform, i.e., the OS and hardware, guarantees small and tightly bounded worst-case memory access timing. By contrast, we call the conventional memory abstraction best-effort memory, for which only highly pessimistic worst-case bounds can be achieved. We propose to use both abstractions together to achieve high time predictability without significantly sacrificing performance. We present deterministic-memory-aware OS and architecture designs, including an OS-level page allocator, hardware-level cache, and DRAM controller designs, and implement the proposed extensions in Linux and the gem5 simulator. Our evaluation, using a set of synthetic and real-world benchmarks, demonstrates the feasibility and effectiveness of our approach.
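At the OS level, a page allocator in such a design can partition physical pages by how their addresses map onto shared cache sets or DRAM banks (their "color"), so deterministic-memory pages never contend with best-effort ones. A minimal Python sketch of that allocation policy, under the assumption of a color-based partitioning scheme; the color count and names are illustrative, not the paper's Linux implementation:

```python
from collections import defaultdict

NUM_COLORS = 16  # illustrative: set by how physical addresses map to sets/banks

class ColorAwareAllocator:
    """Reserve some page colors for deterministic memory; the rest are best-effort."""
    def __init__(self, free_pages, deterministic_colors):
        self.pools = defaultdict(list)
        for pfn in free_pages:                   # bucket free page frames by color
            self.pools[pfn % NUM_COLORS].append(pfn)
        self.det_colors = set(deterministic_colors)

    def alloc(self, deterministic: bool) -> int:
        colors = self.det_colors if deterministic else \
                 set(range(NUM_COLORS)) - self.det_colors
        for c in colors:
            if self.pools[c]:
                return self.pools[c].pop()
        raise MemoryError("no free page in the requested partition")

alloc = ColorAwareAllocator(free_pages=range(1024), deterministic_colors={0, 1})
pfn = alloc.alloc(deterministic=True)  # frame from a reserved, contention-free color
```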