

Search for: All records

Award ID contains: 1730568

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available December 31, 2024
  2. In this paper, we describe the design and development of ISPeL - an Interactive System for Personalization of Learning. Central to ISPeL is topic-based authoring. A topic is a small, self-contained, reusable, and context-free content unit. Learners may study a topic once they have met its prerequisite dependencies. Each topic has associated pre- and post-tests, along with several practice problems to reinforce student learning. We also present a pilot implementation of three undergraduate computer science courses currently in ISPeL.
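The prerequisite gating described above can be sketched in a few lines. The topic names and the set-based representation here are illustrative assumptions, not details of the actual ISPeL implementation.

```python
# Hypothetical sketch of ISPeL-style prerequisite checking.
# Each topic maps to the set of topics that must be completed first.
prerequisites = {
    "variables": set(),
    "loops": {"variables"},
    "functions": {"variables"},
    "recursion": {"functions", "loops"},
}

def can_study(topic, completed):
    """A learner may study a topic once all of its prerequisites are completed."""
    return prerequisites[topic] <= completed

completed = {"variables", "loops"}
print(can_study("functions", completed))  # True: 'variables' is done
print(can_study("recursion", completed))  # False: 'functions' is missing
```

Because the prerequisite map forms a directed acyclic graph, the same structure also yields a valid study order via a topological sort.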
  3. Many natural languages are in decline due to the dominance of English as the language of the World Wide Web (WWW) and the globalized economy, as well as socioeconomic and political factors. Computational Linguistics offers unprecedented opportunities for preserving and promoting natural languages. However, the availability of corpora is essential for leveraging Computational Linguistics techniques. Only a handful of languages have corpora spanning diverse genres; most languages are resource-poor in terms of machine-readable corpora. Telugu, the official language of two southern states in India, is one such language. In this paper, we provide an overview of techniques for assessing language vitality/endangerment, describe existing resources for developing corpora for the Telugu language, discuss our approach to developing corpora, and present preliminary results.
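As a concrete illustration of one step in building a corpus for a resource-poor language, the sketch below filters crawled text down to lines that are predominantly in the Telugu script (Unicode block U+0C00-U+0C7F). The threshold and helper names are assumptions for illustration, not the authors' actual pipeline.

```python
# Illustrative corpus-cleaning step: keep only lines with a high
# proportion of Telugu-script characters (assumed threshold of 80%).

def telugu_ratio(line):
    """Fraction of non-space characters in the Telugu Unicode block (U+0C00-U+0C7F)."""
    chars = [c for c in line if not c.isspace()]
    if not chars:
        return 0.0
    telugu = sum(1 for c in chars if "\u0C00" <= c <= "\u0C7F")
    return telugu / len(chars)

def filter_telugu_lines(lines, threshold=0.8):
    """Keep lines whose Telugu-character ratio meets the threshold."""
    return [ln for ln in lines if telugu_ratio(ln) >= threshold]

sample = ["తెలుగు భాష", "English only line", "mixed తెలుగు text"]
print(filter_telugu_lines(sample))  # only the all-Telugu line survives
```

A script-ratio filter like this is a common first pass for web-crawled corpora; mixed-script lines can then be routed to a separate cleaning stage rather than discarded outright.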
  4. Deep learning is an important technique for extracting value from big data, but its effectiveness requires large volumes of high-quality training data. In many cases, the available training data is too small to train a deep learning classifier effectively. Data augmentation is a widely adopted approach for increasing the amount of training data, but the quality of the augmented data may be questionable, so a systematic evaluation of training data is critical. Furthermore, if the training data is noisy, it is necessary to separate out the noisy data automatically. In this paper, we propose a deep learning classifier for automatically separating good training data from noisy data. To train this classifier effectively, the original training data must be transformed to suit its input format. Moreover, we investigate different data augmentation approaches for generating a sufficient volume of training data from a limited-size original training set. We evaluate the quality of the training data through cross-validation of classification accuracy with different classification algorithms, examine the pattern of each data item, and compare the distributions of the datasets. We demonstrate the effectiveness of the proposed approach through an experimental investigation of automated classification of massive biomedical images. Our approach is generic and easily adaptable to other big data domains.
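The data augmentation idea above can be illustrated with simple label-preserving image transforms. The specific transforms (mirror flips and 90-degree rotations) are common defaults assumed here for illustration, not necessarily those used in the paper.

```python
import numpy as np

def augment(image):
    """Yield simple label-preserving variants of a 2-D image array."""
    yield image                      # original
    yield np.fliplr(image)           # horizontal mirror
    yield np.flipud(image)           # vertical mirror
    for k in (1, 2, 3):
        yield np.rot90(image, k)     # 90/180/270-degree rotations

# A toy 3x3 "image" standing in for a biomedical image tile.
original = np.arange(9).reshape(3, 3)
augmented = list(augment(original))
print(len(augmented))  # 6 variants per original image
```

Applying such transforms multiplies the effective training-set size several-fold, which is exactly why the abstract stresses evaluating the augmented data: not every transform preserves the label in every domain (e.g., orientation can matter for some biomedical images).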