- Award ID(s):
- 1853247
- Publication Date:
- NSF-PAR ID:
- 10205928
- Journal Name:
- Digital Biomarkers
- Page Range or eLocation-ID:
- 109 to 122
- ISSN:
- 2504-110X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Obeid, I. (Ed.) The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open-source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled “MRI: High Performance Digital Pathology Using Big Data and Machine Learning” [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue, including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high-quality annotations of breast tissue. It is well known that state-of-the-art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4], and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not …
-
Obeid, Iyad; Selesnick (Eds.) Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is great interest in developing real-time EEG monitoring tools to improve patient care quality and efficiency [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and less than one false alarm (FA) per 24 hours [3]. Some commercial tools have recently claimed to reach such performance levels, including the Olympic Brainz Monitor [4] and Persyst 14 [5]. In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time, or online, seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must be causal and have minimal latency, which is often defined by domain experts. The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1 or P1) processes linear frequency cepstral coefficient (LFCC) [6] features from each EEG …
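The clinical acceptance criteria cited in this abstract (at least 75% sensitivity, under one false alarm per 24 hours) reduce to two simple ratios over a monitoring period. The sketch below shows how they might be computed; the function name and the event counts are invented for illustration and are not from the described system:

```python
# Illustrative only: sensitivity and false-alarm-rate metrics for a
# seizure detector, using hypothetical counts from a monitoring session.

def seizure_metrics(true_events, detected_events, false_alarms, hours_monitored):
    """Return (sensitivity, false alarms per 24 h)."""
    sensitivity = detected_events / true_events if true_events else 0.0
    fa_per_24h = false_alarms * 24.0 / hours_monitored
    return sensitivity, fa_per_24h

# Example: 8 of 10 seizures caught, 3 false alarms over 96 hours of EEG.
sens, fa = seizure_metrics(true_events=10, detected_events=8,
                           false_alarms=3, hours_monitored=96)
print(f"sensitivity = {sens:.0%}, FA/24h = {fa:.2f}")
# 80% sensitivity meets the 75% target; 0.75 FA/24h is under the 1/24h limit.
```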
-
Abstract This project is funded by the US National Science Foundation (NSF) through their NSF RAPID program under the title “Modeling Corona Spread Using Big Data Analytics.” The project is a joint effort between the Department of Computer & Electrical Engineering and Computer Science at FAU and a research group from LexisNexis Risk Solutions. The novel coronavirus Covid-19 originated in China in early December 2019 and has rapidly spread to many countries around the globe, with the number of confirmed cases increasing every day. Covid-19 is officially a pandemic. It is a novel infection with serious clinical manifestations, including death, and it has reached at least 124 countries and territories. Although the ultimate course and impact of Covid-19 are uncertain, it is not merely possible but likely that the disease will produce enough severe illness to overwhelm the worldwide health care infrastructure. Emerging viral pandemics can place extraordinary and sustained demands on public health and health systems and on providers of essential community services. Modeling the Covid-19 pandemic spread is challenging. But there are data that can be used to project resource demands. Estimates of the reproductive number (R) of SARS-CoV-2 show that at the beginning of the epidemic, each infected …
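The reproductive number R mentioned in this abstract implies generation-over-generation exponential growth: each case seeds R new cases per generation. A minimal sketch of that arithmetic follows; the R value is an illustrative assumption, not a fitted estimate from this project:

```python
# Illustrative only: generation-based case growth implied by a
# reproductive number R. Real projections use far richer models and data.

def project_cases(initial_cases, r, generations):
    """New cases per generation if each case infects r others on average."""
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r)
    return cases

# With an assumed R = 2.5, one index case seeds successive generations of:
print(project_cases(1, 2.5, 4))  # [1, 2.5, 6.25, 15.625, 39.0625]
```

The same function shows why R below 1 matters: with r < 1 each generation shrinks, and the outbreak dies out.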
-
Abstract. Plume-SPH provides the first particle-based simulation of volcanic plumes. Smoothed particle hydrodynamics (SPH) has several advantages over currently used mesh-based methods in modeling of multiphase free-boundary flows like volcanic plumes. This tool will provide more accurate eruption source terms to users of volcanic ash transport and dispersion models (VATDs), greatly improving volcanic ash forecasts. The accuracy of these terms is crucial for forecasts from VATDs, and the 3-D SPH model presented here will provide better numerical accuracy. As an initial effort to exploit the feasibility and advantages of SPH in volcanic plume modeling, we adopt a relatively simple physics model (a 3-D dusty-gas dynamic model assuming well-mixed eruption material, dynamic equilibrium and thermodynamic equilibrium between erupted material and air entrained into the plume, and minimal effect of winds) targeted at capturing the salient features of a volcanic plume. The documented open-source code is easily obtained and extended to incorporate other models of physics of interest to the large community of researchers investigating multiphase free-boundary flows of volcanic or other origins.
The Plume-SPH code (https://doi.org/10.5281/zenodo.572819) also incorporates several newly developed techniques in SPH needed to address numerical challenges in simulating multiphase compressible turbulent flow. The code should thus also be of general interest to the much larger community of researchers using and developing SPH-based tools. In particular, …
The core solver of our model is parallelized with the message passing interface (MPI), obtaining good weak and strong scalability using novel techniques for data management: space-filling curves (SFCs), object-creation-time-based indexing, and hash-table-based storage schemes. These techniques are of interest to researchers engaged in developing particle-in-cell-type methods. The code is first verified by 1-D shock tube tests, then by comparing velocity and concentration distributions along the central axis and on the transverse cross section with experimental results of JPUE (a jet or plume that is ejected from a nozzle into a uniform environment). Profiles of several integrated variables are compared with those calculated by existing 3-D plume models for an eruption with the same mass eruption rate (MER) estimated for the Mt. Pinatubo eruption of 15 June 1991. Our results are consistent with existing 3-D plume models. Analysis of the plume evolution process demonstrates that this model is able to reproduce the physics of plume development.
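The space-filling-curve indexing described above can be illustrated with a 3-D Morton (Z-order) key, which interleaves coordinate bits so that spatially nearby cells tend to receive nearby keys — a property well suited to hash-table-based storage. This is a generic sketch of the technique, not the Plume-SPH implementation:

```python
# Illustrative only: a 3-D Morton (Z-order) space-filling-curve key.
# Interleaving the bits of integer cell coordinates (x, y, z) yields a
# single key that preserves spatial locality for hash-based particle storage.

def morton3d(x, y, z, bits=10):
    """Interleave the low `bits` bits of x, y, z into one integer key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)       # x occupies bit positions 0, 3, 6, ...
        key |= ((y >> i) & 1) << (3 * i + 1)   # y occupies bit positions 1, 4, 7, ...
        key |= ((z >> i) & 1) << (3 * i + 2)   # z occupies bit positions 2, 5, 8, ...
    return key

# Neighboring integer cells produce small, nearby keys:
print(morton3d(1, 0, 0))  # 1
print(morton3d(0, 1, 0))  # 2
print(morton3d(1, 1, 1))  # 7
```

Sorting particles by such a key (or hashing on it) groups spatial neighbors together in memory, which is one way the data-management schemes above achieve scalability.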
-
Background Comprehensive exams such as the Dean-Woodcock Neuropsychological Assessment System, the Global Deterioration Scale, and the Boston Diagnostic Aphasia Examination are the gold standard for doctors and clinicians in the preliminary assessment and monitoring of neurocognitive function in conditions such as neurodegenerative diseases and acquired brain injuries (ABIs). In recent years, there has been an increased focus on implementing these exams on mobile devices to benefit from their configurable built-in sensors, in addition to scoring, interpretation, and storage capabilities. As smartphones become more accepted in health care among both users and clinicians, the ability to use device information (eg, device position, screen interactions, and app usage) for subject monitoring also increases. Sensor-based assessments (eg, functional gait using a mobile device’s accelerometer and/or gyroscope, or collection of speech samples using recordings from the device’s microphone) include the potential for enhanced information for diagnoses of neurological conditions; mapping the development of these conditions over time; and monitoring efficient, evidence-based rehabilitation programs. Objective This paper provides an overview of neurocognitive conditions and relevant functions of interest, analysis of recent results using smartphone and/or tablet built-in sensor information for the assessment of these different neurocognitive conditions, and how human-device interactions and the assessment and …
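As a toy illustration of the sensor-based assessments this abstract mentions (e.g., functional gait from an accelerometer), steps can be counted as upward threshold crossings of the acceleration magnitude. The threshold and the trace below are invented for illustration and are far simpler than any clinical pipeline:

```python
# Illustrative only: naive step counting from accelerometer samples.
# Each sample is an (x, y, z) acceleration tuple in m/s^2; a "step" is an
# upward crossing of a magnitude threshold above the ~9.8 m/s^2 gravity baseline.
import math

def count_steps(samples, threshold=11.0):
    """Count upward crossings of `threshold` in |acceleration|."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    steps = 0
    for prev, cur in zip(mags, mags[1:]):
        if prev < threshold <= cur:
            steps += 1
    return steps

# Synthetic trace: gravity baseline with two impact spikes -> two steps.
trace = [(0, 0, 9.8), (0, 0, 12.0), (0, 0, 9.8), (0, 0, 12.5), (0, 0, 9.8)]
print(count_steps(trace))  # 2
```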