


Award ID contains: 1920920


  1. Abstract

    Objective

    This study aims to establish an informative dynamic prediction model of treatment outcomes using follow-up records of tuberculosis (TB) patients, so that cases in which the current treatment plan may not be effective can be detected in a timely manner.

    Materials and Methods

    We used 122 267 follow-up records from 17 958 new cases of pulmonary TB in the Republic of Moldova. A dynamic prediction framework integrating landmark modeling and machine learning algorithms was designed to predict patient outcomes during the course of treatment. Sensitivity and positive predictive value (PPV) were calculated to evaluate the performance of the model at critical time points. New measures were defined to determine when follow-up laboratory tests should be conducted to obtain the most informative results.

    Results

    The random-forest algorithm performed better than support vector machine and penalized multinomial logistic regression models for predicting TB treatment outcomes. For all 3 outcome classes (ie, cured, not cured, and died after 24 months following treatment initiation), sensitivity and PPV of prediction models improved as more follow-up information was collected. Specifically, sensitivity and PPV increased from 0.55 to 0.84 and from 0.32 to 0.88, respectively, for the not cured class.

    Conclusion

    The dynamic prediction framework utilizes longitudinal laboratory test results to predict patient outcomes at various landmarks. Sputum culture and smear results are among the important variables for prediction; however, the most recent sputum result is not always the most informative one. This framework can potentially facilitate a more effective treatment monitoring program and provide insights for policymakers toward improved guidelines on follow-up tests.

     
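The one-vs-rest sensitivity and PPV used to evaluate the model at each landmark can be computed directly from predicted and observed outcome classes. A minimal sketch in Python (not the study's code; the toy labels below are hypothetical):

```python
def per_class_sensitivity_ppv(y_true, y_pred, cls):
    """One-vs-rest sensitivity (recall) and positive predictive
    value (precision) for a single outcome class."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    return sens, ppv

# toy labels using the paper's three outcome classes
y_true = ["cured", "not_cured", "died", "cured", "not_cured"]
y_pred = ["cured", "not_cured", "cured", "cured", "cured"]
print(per_class_sensitivity_ppv(y_true, y_pred, "not_cured"))  # → (0.5, 1.0)
```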
  2. Abstract

    Over the past several years, a multitude of methods for measuring the fairness of a machine learning model have been proposed. However, despite the growing number of publications and implementations, there is still a critical lack of literature explaining how fair machine learning interacts with the social sciences of philosophy, sociology, and law. We hope to remedy this issue by accumulating and expounding upon the thoughts and discussions of fair machine learning produced by both the social and formal (i.e., machine learning and statistics) sciences in this field guide. Specifically, in addition to giving the mathematical and algorithmic backgrounds of several popular statistics-based metrics used in fair machine learning, we explain the underlying philosophical and legal thoughts that support them. Furthermore, we explore several criticisms of current approaches to fair machine learning from sociological, philosophical, and legal viewpoints. It is our hope that this field guide helps machine learning practitioners identify and remediate cases where algorithms violate human rights and values.

     
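As one concrete example of the statistics-based metrics such a field guide covers, demographic (statistical) parity compares positive-prediction rates across demographic groups. A minimal sketch, assuming binary predictions and exactly two groups (the data below are made up):

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups;
    0 means the classifier satisfies demographic parity exactly."""
    rates = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# group "a" is predicted positive at rate 2/3, group "b" at 1/3
print(demographic_parity_difference([1, 0, 1, 1, 0, 0],
                                    ["a", "a", "a", "b", "b", "b"]))
```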
  3. Video Paragraph Captioning aims to generate a multi-sentence description of an untrimmed video, covering multiple temporal event locations, as a coherent story. Following the human perception process, in which a scene is effectively understood by decomposing it into visual (e.g., human, animal) and non-visual components (e.g., action, relations) under the mutual influence of vision and language, we first propose a visual-linguistic (VL) feature. In the proposed VL feature, the scene is modeled by three modalities: (i) a global visual environment; (ii) local visual main agents; and (iii) linguistic scene elements. We then introduce an autoregressive Transformer-in-Transformer (TinT) to simultaneously capture the semantic coherence of intra- and inter-event contents within a video. Finally, we present a new VL contrastive loss function to guarantee that the learnt embedding features are consistent with the caption semantics. Comprehensive experiments and extensive ablation studies on the ActivityNet Captions and YouCookII datasets show that the proposed Visual-Linguistic Transformer-in-Transformer (VLTinT) outperforms previous state-of-the-art methods in terms of accuracy and diversity. The source code is made publicly available at: https://github.com/UARK-AICV/VLTinT.
    Free, publicly-accessible full text available June 27, 2024
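The VL contrastive loss is described only at a high level in the abstract; the paper's exact formulation lives in the linked repository. A generic InfoNCE-style sketch of contrasting paired visual and caption embeddings (the function name, toy embeddings, and temperature value are illustrative assumptions, not the paper's settings):

```python
import math

def vl_contrastive_loss(vis, txt, temperature=0.1):
    """InfoNCE-style loss: the i-th visual embedding should score
    highest against its own caption embedding among all captions."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n, loss = len(vis), 0.0
    for i in range(n):
        logits = [dot(vis[i], t) / temperature for t in txt]
        m = max(logits)                      # shift for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_z)         # cross-entropy on pair i
    return loss / n

# correctly paired embeddings yield a much lower loss than swapped ones
matched = vl_contrastive_loss([[1.0, 0.0], [0.0, 1.0]],
                              [[1.0, 0.0], [0.0, 1.0]])
swapped = vl_contrastive_loss([[1.0, 0.0], [0.0, 1.0]],
                              [[0.0, 1.0], [1.0, 0.0]])
```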
  4. Machine learning models to predict refugee crisis situations are still lacking. The model proposed in this work uses a set of predictive features that are indicative of the sociocultural, socioeconomic, and economic characteristics that exist within each country and region. Twenty-eight features were collected for specific countries and years. The feature set was tested in experiments using ordinary least squares regression based on regional subsets. Potential location-based features stood out in our results, such as the global peace index, access to electricity, access to basic water, media censorship, and healthcare. The model performed best for the region of Europe, wherein the features with the most predictive power included access to justice and homicide rate. Corruption features stood out in both Africa and Asia, while population features were dominant in the Americas. Model performance metrics are provided for each experiment. Limitations of this dataset are discussed, as are steps for future work. 
    Free, publicly-accessible full text available May 28, 2024
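Ordinary least squares, the regression method used above, can be fit directly from the normal equations. A self-contained sketch (the single-feature toy data are hypothetical, not drawn from the study's 28-feature set):

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations X^T X b = X^T y,
    solved with Gaussian elimination and partial pivoting."""
    X = [[1.0] + list(row) for row in X]       # prepend intercept column
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        s = b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))
        beta[r] = s / A[r][r]
    return beta  # [intercept, coef_1, ..., coef_p]

# y = 1 + 2x recovered from noiseless toy data
print(ols_fit([[0.0], [1.0], [2.0], [3.0]], [1.0, 3.0, 5.0, 7.0]))
```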
  5. Free, publicly-accessible full text available May 1, 2024