
Title: Understanding and predicting COVID-19 clinical trial completion vs. cessation
As of March 30, 2021, over 5,193 COVID-19 clinical trials have been registered through ClinicalTrials.gov. Among them, 191 trials were terminated, suspended, or withdrawn (indicating cessation of the study), while 909 trials have been completed (indicating completion of the study). In this study, we propose to study the underlying factors of COVID-19 trial completion vs. cessation, and to design predictive models that accurately predict whether a COVID-19 trial will complete or cease. We collect 4,441 COVID-19 trials from ClinicalTrials.gov to build a testbed, and design four types of features to characterize clinical trial administration, eligibility, study information, criteria, drug types, and study keywords, as well as embedding features commonly used in state-of-the-art machine learning. Our study shows that drug features and study keywords are the most informative features, but all four types of features are essential for accurate trial prediction. Using predictive models, our approach achieves more than a 0.87 AUC (Area Under the Curve) score and 0.81 balanced accuracy in correctly predicting COVID-19 clinical trial completion vs. cessation. Our research shows that computational methods can deliver effective features for understanding the difference between completed vs. ceased COVID-19 trials. In addition, such models can predict COVID-19 trial status with satisfactory accuracy, and help stakeholders better plan trials and minimize costs.
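The two metrics reported above, AUC and balanced accuracy, can be reproduced without any ML library. A minimal sketch with toy labels and scores (illustrative numbers, not the study's data; 1 = completed, 0 = ceased):

```python
# Sketch: the two evaluation metrics from the abstract, from scratch.
# Toy data only — not the study's trials or model outputs.

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def balanced_accuracy(labels, preds):
    """Mean of per-class recalls, robust to the imbalance between
    completed (majority) and ceased (minority) trials."""
    recalls = []
    for cls in (0, 1):
        idx = [i for i, y in enumerate(labels) if y == cls]
        recalls.append(sum(preds[i] == cls for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Toy example: four completed trials, two ceased
y_true = [1, 1, 1, 1, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.4, 0.45, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]

print(auc(y_true, y_score))
print(balanced_accuracy(y_true, y_pred))
```

Balanced accuracy matters here because completed trials outnumber ceased ones by roughly 5:1, so plain accuracy would reward always predicting "completed".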
Gadekallu, Thippa Reddy
Award ID(s):
1763452 2027339
Sponsoring Org:
National Science Foundation
More Like this
  1. Kretzschmar, Mirjam E. (Ed.)
    Background Development of an effective antiviral drug for Coronavirus Disease 2019 (COVID-19) is a global health priority. Although several candidate drugs have been identified through in vitro and in vivo models, consistent and compelling evidence from clinical studies is limited. The lack of evidence from clinical trials may stem in part from the imperfect design of the trials. We investigated how clinical trials for antivirals need to be designed, especially focusing on the sample size in randomized controlled trials. Methods and findings A modeling study was conducted to help understand the reasons behind inconsistent clinical trial findings and to design better clinical trials. We first analyzed longitudinal viral load data for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) without antiviral treatment by use of a within-host virus dynamics model. The fitted viral load was categorized into 3 different groups by a clustering approach. Comparison of the estimated parameters showed that the 3 distinct groups were characterized by different virus decay rates (p-value < 0.001). The mean decay rates were 1.17 d⁻¹ (95% CI: 1.06 to 1.27 d⁻¹), 0.777 d⁻¹ (0.716 to 0.838 d⁻¹), and 0.450 d⁻¹ (0.378 to 0.522 d⁻¹) for the 3 groups, respectively. Such heterogeneity in virus dynamics could be a confounding variable if it is associated with treatment allocation in compassionate use programs (i.e., observational studies). Subsequently, we mimicked randomized controlled trials of antivirals by simulation. An antiviral effect causing a 95% to 99% reduction in viral replication was added to the model. To be realistic, we assumed that randomization and treatment are initiated with some time lag after symptom onset.
Using the duration of virus shedding as an outcome, the sample size to detect a statistically significant mean difference between the treatment and placebo groups (1:1 allocation) was 13,603 and 11,670 (when the antiviral effect was 95% and 99%, respectively) per group if all patients are enrolled regardless of the timing of randomization. The sample size was reduced to 584 and 458 (when the antiviral effect was 95% and 99%, respectively) if only patients who are treated within 1 day of symptom onset are enrolled. We confirmed the sample size was similarly reduced when using cumulative viral load in log scale as an outcome. We used a conventional virus dynamics model, which may not fully reflect the detailed mechanisms of viral dynamics of SARS-CoV-2. The model needs to be calibrated in terms of both parameter settings and model structure, which would yield a more reliable sample size calculation. Conclusions In this study, we found that estimated associations in observational studies can be biased due to large heterogeneity in viral dynamics among infected individuals, and that a statistically significant effect in randomized controlled trials may be difficult to detect due to small sample sizes. The sample size can be dramatically reduced by recruiting patients immediately after they develop symptoms. We believe this is the first study to investigate the design of clinical trials for antiviral treatment using a viral dynamics model.
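The per-group sample sizes above come from simulation, but the qualitative lesson — that the required n shrinks quadratically as the detectable effect grows relative to its variability — already follows from the textbook two-sample formula n = 2(z₁₋α/₂ + z₁₋β)² σ² / Δ² per group. A sketch with illustrative inputs (α = 0.05, power = 0.80; the effect sizes and SD below are made up, not the paper's values):

```python
import math

def n_per_group(delta, sd, z_alpha=1.96, z_beta=0.8416):
    """Per-group n for a two-sample mean comparison, 1:1 allocation:
    n = 2 * (z_{1-a/2} + z_{1-b})^2 * sd^2 / delta^2, rounded up.
    Defaults: alpha = 0.05 (two-sided), power = 0.80."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# A small shift in shedding duration relative to its spread needs a large
# trial; a larger effect (e.g. from treating soon after symptom onset)
# shrinks the trial dramatically. Numbers are illustrative only.
print(n_per_group(delta=0.5, sd=3.0))  # small effect vs. variability
print(n_per_group(delta=2.0, sd=3.0))  # 4x the effect -> ~1/16 the n
```

This toy calculation mirrors the paper's finding in direction only; their actual numbers depend on the fitted viral dynamics model and the simulated outcome distributions.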
  2. Abstract In this study, we propose to use machine learning to understand terminated clinical trials. Our goal is to answer two fundamental questions: (1) what are common factors/markers associated with terminated clinical trials? and (2) how can we accurately predict whether a clinical trial may be terminated or not? The answer to the first question provides effective ways to understand the characteristics of terminated trials, so stakeholders can better plan their trials; the answer to the second question can directly estimate the chance of success of a clinical trial in order to minimize costs. By using 311,260 trials to build a testbed with 68,999 samples, we use feature engineering to create 640 features reflecting clinical trial administration, eligibility, study information, criteria, etc. Using feature ranking, a handful of features, such as trial eligibility, trial inclusion/exclusion criteria, and sponsor types, are found to be related to clinical trial termination. By using sampling and ensemble learning, we achieve over 67% balanced accuracy and over 0.73 AUC (Area Under the Curve) scores in correctly predicting clinical trial termination, indicating that machine learning can achieve satisfactory prediction results for clinical trial studies.
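The sampling-plus-ensemble recipe mentioned above can be sketched in miniature: train each ensemble member on a balanced undersample of the imbalanced data and predict by majority vote. Everything here is a toy — one synthetic feature and threshold stumps stand in for the study's 640 features and real learners:

```python
import random

# Sketch: balanced undersampling + a voting ensemble of threshold stumps,
# a toy instance of the sampling/ensemble idea. Data are synthetic:
# label 1 = terminated (minority), label 0 = not terminated (majority).
random.seed(0)

def undersample(data):
    """Balance classes by downsampling the majority class."""
    pos = [d for d in data if d[1] == 1]
    neg = [d for d in data if d[1] == 0]
    major, minor = (pos, neg) if len(pos) > len(neg) else (neg, pos)
    return minor + random.sample(major, len(minor))

def train_stump(data):
    """Pick the threshold (from observed feature values) that maximizes
    accuracy of the rule 'predict 1 when feature >= threshold'."""
    best = max(
        ((t, sum((x >= t) == y for x, y in data)) for t, _ in data),
        key=lambda p: p[1],
    )
    return best[0]

def ensemble_predict(thresholds, x):
    votes = sum(x >= t for t in thresholds)
    return int(votes * 2 >= len(thresholds))  # majority vote

# 90 majority vs. 10 minority examples, with some class separation
data = [(random.gauss(0, 1), 0) for _ in range(90)] + \
       [(random.gauss(2, 1), 1) for _ in range(10)]
stumps = [train_stump(undersample(data)) for _ in range(25)]
acc = sum(ensemble_predict(stumps, x) == y for x, y in data) / len(data)
print(acc)
```

In practice one would use proper train/test splits and stronger base learners; the point is only how undersampling keeps each member from defaulting to the majority class.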
  3. Clinical trials are crucial for the advancement of treatment and knowledge within the medical community. Since 2007, the US federal government has required organizations sponsoring clinical trials with at least one site in the United States to submit information on these trials to the ClinicalTrials.gov database, resulting in a rich source of information for clinical trial research. Nevertheless, only a handful of analytic studies have been carried out to understand this valuable data source. In this study, we propose to use network analysis to understand infectious disease clinical trial research. Our goal is to answer two important questions: (1) what are the concentrations and characteristics of infectious disease clinical trial research? and (2) how can we accurately predict what type of clinical trials a sponsor (or an investigator) is interested in? The answers to the first question provide effective ways to summarize clinical trial research related to particular disease(s), and the answers to the second question help match clinical trial sponsors and investigators for information recommendation. By using 4,228 clinical trials as the testbed, our study involves 4,864 sponsors and 1,879 research areas characterized by Medical Subject Heading (MeSH) keywords. We extract a set of network measures to show patterns of infectious disease clinical trials, and design a new community-based link prediction approach to predict sponsors' interests, with significant improvement compared to baselines. This transformative study concludes that network analysis can tremendously help the understanding of clinical trial research for effective summarization, characterization, and prediction.
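Link prediction on a sponsor–research-area bipartite graph can be illustrated with a simple shared-neighbour score; the paper's community-based approach is more elaborate, and the sponsor and area names below are made up:

```python
# Sketch: predicting new sponsor -> research-area links on a tiny
# bipartite graph. A candidate area scores higher when sponsors with
# overlapping interests already study it (a common-neighbour signal).
# All names are hypothetical, not from the study's 4,864 sponsors.

edges = {
    "sponsor_A": {"influenza", "hepatitis"},
    "sponsor_B": {"influenza", "hiv"},
    "sponsor_C": {"hiv", "malaria"},
}

def score(sponsor, area):
    """Count other sponsors that share at least one area with `sponsor`
    and already study `area`."""
    return sum(
        1
        for other, areas in edges.items()
        if other != sponsor and areas & edges[sponsor] and area in areas
    )

# Rank candidate new areas for sponsor_A by descending score
candidates = {"hiv", "malaria"}
ranked = sorted(candidates, key=lambda a: -score("sponsor_A", a))
print(ranked)
```

Here sponsor_A is recommended "hiv" first, because sponsor_B shares "influenza" with it and already studies "hiv"; no overlapping sponsor studies "malaria".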
  4. The COVID-19 pandemic has highlighted the need to quickly and reliably prioritize clinically approved compounds for their potential effectiveness for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections. Here, we deployed algorithms relying on artificial intelligence, network diffusion, and network proximity, tasking each of them to rank 6,340 drugs for their expected efficacy against SARS-CoV-2. To test the predictions, we used as ground truth 918 drugs experimentally screened in VeroE6 cells, as well as the list of drugs in clinical trials that capture the medical community’s assessment of drugs with potential COVID-19 efficacy. We find that no single predictive algorithm offers consistently reliable outcomes across all datasets and metrics. This outcome prompted us to develop a multimodal technology that fuses the predictions of all algorithms, finding that a consensus among the different predictive methods consistently exceeds the performance of the best individual pipelines. We screened in human cells the top-ranked drugs, obtaining a 62% success rate, in contrast to the 0.8% hit rate of nonguided screenings. Of the six drugs that reduced viral infection, four could be directly repurposed to treat COVID-19, proposing novel treatments for COVID-19. We also found that 76 of the 77 drugs that successfully reduced viral infection do not bind the proteins targeted by SARS-CoV-2, indicating that these network drugs rely on network-based mechanisms that cannot be identified using docking-based strategies. These advances offer a methodological pathway to identify repurposable drugs for future pathogens and neglected diseases underserved by the costs and extended timeline of de novo drug development.
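One simple way to fuse per-algorithm rankings into a consensus is average-rank aggregation; the study's multimodal fusion is more sophisticated, and the drug names and toy rankings below are purely illustrative:

```python
# Sketch: consensus ranking by average rank across predictive pipelines.
# The three toy rankings stand in for the AI, network-diffusion, and
# network-proximity outputs; drug names are made up.

rankings = [
    ["drug_x", "drug_y", "drug_z"],  # e.g. network proximity
    ["drug_y", "drug_x", "drug_z"],  # e.g. network diffusion
    ["drug_x", "drug_z", "drug_y"],  # e.g. an AI pipeline
]

def consensus(rankings):
    """Order drugs by their mean position across all input rankings
    (lower mean rank = stronger consensus candidate)."""
    drugs = set().union(*rankings)
    avg = {d: sum(r.index(d) for r in rankings) / len(rankings) for d in drugs}
    return sorted(drugs, key=lambda d: avg[d])

print(consensus(rankings))
```

Average rank is just one aggregation rule; alternatives such as Borda counts or rank-product scores behave similarly on this toy input but can differ when pipelines disagree strongly.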
  5. Objectively differentiating patient mental states based on electrical activity, as opposed to overt behavior, is a fundamental neuroscience problem with medical applications, such as identifying patients in locked-in state vs. coma. Electroencephalography (EEG), which detects millisecond-level changes in brain activity across a range of frequencies, allows for assessment of external stimulus processing by the brain in a non-invasive manner. We applied machine learning methods to 26-channel EEG data of 24 fluent Deaf signers watching videos of sign language sentences (comprehension condition), and the same videos reversed in time (non-comprehension condition), to objectively separate vision-based high-level cognition states. While spectrotemporal parameters of the stimuli were identical in comprehension vs. non-comprehension conditions, the neural responses of participants varied based on their ability to linguistically decode visual data. We aimed to determine which subset of parameters (specific scalp regions or frequency ranges) would be necessary and sufficient for high classification accuracy of comprehension state. Optical flow, characterizing the distribution of velocities of objects in an image, was calculated for each pixel of the stimulus videos using the MATLAB Vision toolbox. Coherence between optical flow in the stimulus and the EEG neural response (per video, per participant) was then computed using canonical component analysis with the NoiseTools toolbox. Peak correlations were extracted for each frequency for each electrode, participant, and video. A set of standard ML algorithms was applied to the entire dataset (26 channels, frequencies from 0.2 Hz to 12.4 Hz, binned in 1 Hz increments), with consistent out-of-sample 100% accuracy for frequencies in the 0.2–1 Hz range for all regions, and above 80% accuracy for frequencies < 4 Hz.
Sparse Optimal Scoring (SOS) was then applied to the EEG data to reduce the dimensionality of the features and improve model interpretability. SOS with elastic-net penalty resulted in out-of-sample classification accuracy of 98.89%. The sparsity pattern in the model indicated that frequencies between 0.2–4 Hz were primarily used in the classification, suggesting that underlying data may be group sparse. Further, SOS with group lasso penalty was applied to regional subsets of electrodes (anterior, posterior, left, right). All trials achieved greater than 97% out-of-sample classification accuracy. The sparsity patterns from the trials using 1 Hz bins over individual regions consistently indicated frequencies between 0.2–1 Hz were primarily used in the classification, with anterior and left regions performing the best with 98.89% and 99.17% classification accuracy, respectively. While the sparsity pattern may not be the unique optimal model for a given trial, the high classification accuracy indicates that these models have accurately identified common neural responses to visual linguistic stimuli. Cortical tracking of spectro-temporal change in the visual signal of sign language appears to rely on lower frequencies proportional to the N400/P600 time-domain evoked response potentials, indicating that visual language comprehension is grounded in predictive processing mechanisms.
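The idea that a sparse model concentrates on the lowest frequency bins can be illustrated with a univariate screen over per-bin features. The sketch below is not SOS: it uses synthetic EEG-like features in which, by construction, only the lowest bin carries a class difference, and shows that a simple between-class separation measure picks that bin out:

```python
import random

# Sketch: a univariate screen over 1 Hz frequency bins, illustrating why
# a sparse classifier could end up using only the lowest frequencies.
# Features are synthetic and hypothetical: only the first bin differs
# between comprehension (1) and non-comprehension (0) trials.
random.seed(1)

BINS = [0.2 + i for i in range(12)]  # bin centres, ~0.2 to 11.2 Hz

def make_trial(label):
    # Class difference injected only into the lowest-frequency bin
    return [random.gauss(2.0 if (label and i == 0) else 0.0, 0.5)
            for i in range(len(BINS))]

trials = [(make_trial(y), y) for y in [0, 1] * 50]  # 100 trials

def separation(i):
    """Absolute difference of class means for frequency bin i."""
    by_class = {0: [], 1: []}
    for features, y in trials:
        by_class[y].append(features[i])
    means = {c: sum(v) / len(v) for c, v in by_class.items()}
    return abs(means[1] - means[0])

best_bin = max(range(len(BINS)), key=separation)
print(BINS[best_bin])
```

A real SOS fit adds a sparsity penalty inside the classifier rather than screening features beforehand, but both end up highlighting the same discriminative bins when the signal is this concentrated.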