

Search for: All records

Award ID contains: 1916204


  1. Motivated by recent work on the analysis of biomedical imaging data, we present a novel procedure for constructing simultaneous confidence corridors for the mean of imaging data. We propose flexible bivariate splines over triangulations to handle the irregular image domains that are common in brain imaging studies and other biomedical imaging applications. The proposed spline estimators of the mean functions are shown to be consistent and asymptotically normal under some regularity conditions. We also provide a computationally efficient estimator of the covariance function and derive its uniform consistency. The procedure is further extended to the two-sample case, in which we focus on comparing the mean functions from two populations of imaging data. Through Monte Carlo simulation studies, we examine the finite-sample performance of the proposed method. Finally, the proposed method is applied to analyze brain positron emission tomography data in two different studies. One data set used in preparation of this article was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.

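    The corridor idea can be illustrated with a simplified one-dimensional analogue: a bootstrap of the supremum of the standardized deviation yields a critical value for a simultaneous (rather than pointwise) band around the sample mean function. This sketch uses simulated data and does not reproduce the paper's bivariate-spline-over-triangulation machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated functional data: n subjects observed on a common grid.
    n, m = 50, 100
    grid = np.linspace(0.0, 1.0, m)
    true_mean = np.sin(2 * np.pi * grid)
    data = true_mean + rng.normal(scale=0.3, size=(n, m))

    # Pointwise mean and standard error.
    mean_hat = data.mean(axis=0)
    se_hat = data.std(axis=0, ddof=1) / np.sqrt(n)

    # Bootstrap the supremum of the standardized deviation; its 95%
    # quantile is the critical value for a simultaneous band.
    B = 1000
    sup_stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        boot = data[idx]
        boot_mean = boot.mean(axis=0)
        boot_se = boot.std(axis=0, ddof=1) / np.sqrt(n)
        sup_stats[b] = np.max(np.abs(boot_mean - mean_hat) / boot_se)

    crit = np.quantile(sup_stats, 0.95)
    lower = mean_hat - crit * se_hat
    upper = mean_hat + crit * se_hat
    ```

    Because the critical value controls the maximum deviation over the whole grid, it exceeds the pointwise 1.96, which is what makes the band simultaneous.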
  3. Over the past few months, the coronavirus disease (COVID-19) outbreak has been spreading across the world. A reliable and accurate dataset of the cases is vital for scientists to conduct related research and for policy-makers to make better decisions. We collect United States COVID-19 daily reported data from four open sources: the New York Times, the COVID-19 Data Repository by Johns Hopkins University, the COVID Tracking Project at the Atlantic, and USAFacts, and then compare the similarities and differences among them. To obtain reliable data for further analysis, we first examine the cyclical pattern and the following anomalies, which frequently occur in the reported cases: (1) violations of order dependencies, (2) point or period anomalies, and (3) reporting delays. To address these detected issues, we propose corresponding repair methods and procedures where corrections are necessary. In addition, we integrate the COVID-19 reported cases with county-level auxiliary information on local features from official sources, such as health infrastructure, demographic, socioeconomic, and environmental information, which are also essential for understanding the spread of the virus.
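    One of the anomalies above, a violation of order dependencies, occurs when a reported cumulative case count decreases from one day to the next. A minimal illustration of a repair (not the article's actual procedure, which is more elaborate) is to clip each value at the running maximum from the left:

    ```python
    import numpy as np

    def repair_cumulative(counts):
        """Enforce monotonicity on a cumulative case series.

        Reported cumulative counts occasionally dip (an order-dependency
        violation, often a correction by the source); clipping at the
        running maximum restores monotonicity. Illustrative only.
        """
        counts = np.asarray(counts, dtype=float)
        return np.maximum.accumulate(counts)

    def daily_from_cumulative(cumulative):
        """Recover non-negative daily new cases from a repaired series."""
        return np.diff(np.concatenate(([0.0], cumulative)))

    # A toy series with one violation: day 4 dips below day 3.
    raw = [10, 25, 40, 38, 55]
    fixed = repair_cumulative(raw)        # [10, 25, 40, 40, 55]
    daily = daily_from_cumulative(fixed)  # [10, 15, 15, 0, 15]
    ```

    After the repair, daily increments are guaranteed non-negative, which downstream models of new cases typically require.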
  5. The coronavirus disease 2019 (COVID-19) pandemic caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has placed epidemic modeling at the center of public policymaking. Predicting the severity and speed of transmission of COVID-19 is crucial to resource management and to developing strategies to deal with this epidemic. Based on the available data from current and previous outbreaks, many efforts have been made to develop epidemiological models, including statistical models, computer simulations, and mathematical representations of the virus and its impacts. Despite their usefulness, modeling and forecasting the spread of COVID-19 remains a challenge. In this article, we give an overview of the unique features and issues of COVID-19 data and how they impact epidemic modeling and projection. In addition, we illustrate how various models can be connected to each other. Moreover, we provide new data science perspectives on the challenges of COVID-19 forecasting, from data collection, curation, and validation to the limitations of models and the uncertainty of the forecast. Finally, we discuss some data science practices that are crucial to more robust and accurate epidemic forecasting.
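    The simplest of the mathematical representations surveyed is the classic SIR compartment model, sketched below with a forward-Euler step. This is a deliberately minimal example; real COVID-19 models add compartments, reporting delays, and stochasticity.

    ```python
    def sir_step(s, i, r, beta, gamma, dt=1.0):
        """One Euler step of the SIR model.

        s, i, r are susceptible/infected/recovered population fractions;
        beta is the transmission rate, gamma the recovery rate.
        """
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        return s - new_inf, i + new_inf - new_rec, r + new_rec

    # Simulate 160 days starting from one infection per 1,000 people,
    # with illustrative rates beta=0.3, gamma=0.1 (so R0 = 3).
    s, i, r = 0.999, 0.001, 0.0
    peak = i
    for _ in range(160):
        s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1)
        peak = max(peak, i)
    ```

    Even this toy model reproduces the qualitative epidemic shape: infections peak once the susceptible fraction falls toward 1/R0, then decline, with the population total conserved throughout.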
  6. Background: Fluid intelligence (FI) involves abstract problem-solving without prior knowledge. Greater age-related FI decline increases Alzheimer's disease (AD) risk, and recent studies suggest that certain dietary regimens may influence rates of decline. However, it is uncertain how long-term food consumption affects FI among adults with or without a familial history of AD (FH) or APOE4 (ɛ4). Objective: To observe how the total diet is associated with long-term cognition among mid- to late-life populations at risk and not at risk for AD. Methods: Among 1,787 mid- to late-aged adult UK Biobank participants, 10-year FI trajectories were modeled and regressed onto the total diet based on self-reported intake of 49 whole foods from a Food Frequency Questionnaire (FFQ). Results: Daily cheese intake strongly predicted better FIT scores over time (FH-: β = 0.207, p < 0.001; ɛ4-: β = 0.073, p = 0.008; ɛ4+: β = 0.162, p = 0.001). Daily alcohol of any type also appeared beneficial (ɛ4+: β = 0.101, p = 0.022), and red wine was sometimes additionally protective (FH+: β = 0.100, p = 0.014; ɛ4-: β = 0.59, p = 0.039). Consuming lamb weekly was associated with improved outcomes (FH-: β = 0.066, p = 0.008; ɛ4+: β = 0.097, p = 0.044). Among at-risk groups, added salt correlated with decreased performance (FH+: β = -0.114, p = 0.004; ɛ4+: β = -0.121, p = 0.009). Conclusion: Modifying meal plans may help minimize cognitive decline. We observed that added salt may put at-risk individuals at greater risk, but did not observe similar interactions among FH- and AD- individuals. Observations further suggest, in a risk status-dependent manner, that adding cheese and red wine to the diet daily, and lamb on a weekly basis, may also improve long-term cognitive outcomes.
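    The regression design behind the reported β's can be sketched in miniature: a per-participant cognitive slope regressed on a binary dietary indicator. The data here are simulated with a hypothetical effect size, not UK Biobank data, and the single-predictor OLS below stands in for the study's fuller trajectory model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: per-participant fluid-intelligence slopes and a
    # binary dietary indicator (e.g., 1 = daily intake of a given food).
    # Effect size 0.2 and noise scale 0.5 are assumptions for illustration.
    n = 500
    diet = rng.integers(0, 2, size=n)
    slope = 0.2 * diet + rng.normal(scale=0.5, size=n)

    # OLS of slope on the indicator plus an intercept; the coefficient on
    # the indicator is the estimated association, analogous to a reported β.
    X = np.column_stack([np.ones(n), diet])
    beta_hat, *_ = np.linalg.lstsq(X, slope, rcond=None)
    effect = beta_hat[1]
    ```

    In the study itself, such associations are estimated separately within risk strata (FH±, ɛ4±), which is why each food carries several β's above.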