Clinical diagnosis of stuttering requires an assessment by a licensed speech-language pathologist. However, this process is time-consuming and requires clinicians with training and experience in stuttering and fluency disorders. Unfortunately, only a small percentage of speech-language pathologists report being comfortable working with individuals who stutter, far too few to accommodate the 80 million individuals who stutter worldwide. Developing machine learning models for detecting stuttered speech would enable universal, automated screening for stuttering, allowing speech-language pathologists to identify and follow up with the patients most likely to be diagnosed with a stuttering speech disorder. Previous research in this area has predominantly focused on utterance-level detection, which is insufficient for clinical settings where word-level annotation of stuttering is the norm. In this study, we curated a stuttered speech dataset with word-level annotations and introduced a word-level stuttering speech detection model leveraging self-supervised speech models. Our evaluation demonstrates that our model surpasses previous approaches in word-level stuttering speech detection. Additionally, we conducted an extensive ablation analysis of our method, providing insight into the most important aspects of adapting self-supervised speech models for stuttered speech detection.
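Below is a minimal sketch of the kind of word-level detector this abstract describes: frame-level features from a pretrained self-supervised encoder are mean-pooled within word boundaries and classified per word. The encoder choice (wav2vec2-base), the mean-pooling, and the forced-alignment word spans are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: word-level stuttering detection on top of a
# self-supervised speech encoder. Not the authors' released code.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class WordLevelStutterDetector(nn.Module):
    def __init__(self, ssl_name="facebook/wav2vec2-base", n_classes=2):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(ssl_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, n_classes)

    def forward(self, waveform, word_spans):
        # waveform: (1, n_samples) at 16 kHz; word_spans: list of
        # (start_frame, end_frame) pairs from a forced aligner (assumed).
        frames = self.encoder(waveform).last_hidden_state[0]  # (T, H)
        word_vecs = torch.stack(
            [frames[s:e].mean(dim=0) for s, e in word_spans])  # (W, H)
        return self.classifier(word_vecs)  # one stutter logit pair per word

detector = WordLevelStutterDetector()
audio = torch.randn(1, 16000)                   # 1 s of dummy audio
logits = detector(audio, [(0, 25), (25, 49)])   # two illustrative "words"
```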
Analyzing Machine Learning Models that Predict Mental Illnesses from Social Media Text
Previous studies in both psychology and linguistics have shown that individuals with mental illnesses exhibit deviations from typical language use, and that these differences can be used to make predictions and serve as a diagnostic tool. Recent studies have shown that machine learning can be used to predict whether people have mental illnesses based on their writing. However, little attention has been paid to the interpretability of these machine learning models. In this talk we describe our analysis of the machine learning models, the language patterns that distinguish individuals with mental illnesses from a control group, and the associated privacy concerns. We use a dataset of tweets collected from users who reported a diagnosis of a mental illness on Twitter. Given the self-reported nature of the dataset, it is possible that some of these individuals actively talk about their mental illness on social media. We investigated whether the machine learning models are detecting these explicit mentions of mental illness or whether they are detecting more complex language patterns. We then conducted a feature analysis, creating feature vectors from word unigrams, part-of-speech tags, and word clusters, and used feature importance measures and statistical methods to identify important features. This analysis serves two purposes: to understand the machine learning model, and to discover language patterns that would help in identifying people with mental illnesses. Finally, we conducted a qualitative analysis of the misclassifications to understand their potential causes.
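As an illustration of the feature-analysis style described above, here is a hedged sketch: a linear classifier over word unigrams whose learned weights serve as one simple feature importance measure. The toy tweets and labels are placeholders, not the study's data.

```python
# Hedged sketch of unigram feature analysis with a linear model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["feeling anxious again today", "great run this morning",
          "can't sleep, thoughts racing", "lunch with friends was fun"]
labels = [1, 0, 1, 0]  # 1 = self-reported diagnosis group, 0 = control

vec = CountVectorizer(ngram_range=(1, 1))     # word unigrams
X = vec.fit_transform(tweets)
clf = LogisticRegression().fit(X, labels)

# Features with the largest positive weights are most indicative of the
# diagnosed group; ranking by weight is one simple importance measure.
ranked = sorted(zip(clf.coef_[0], vec.get_feature_names_out()), reverse=True)
print(ranked[:5])
```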
- Award ID(s):
- 1711773
- PAR ID:
- 10100200
- Date Published:
- Journal Name:
- Privacy Enhancing Technologies Symposium
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Background: People’s health-related knowledge influences health outcomes, as this knowledge may influence whether individuals follow advice from their doctors or public health agencies. Yet, little attention has been paid to where people obtain health information and how these information sources relate to the quality of knowledge.
Objective: We aim to discover what information sources people use to learn about health conditions, how these sources relate to the quality of their health knowledge, and how both the number of information sources and health knowledge change over time.
Methods: We surveyed 200 different individuals at 12 time points from March through September 2020. At each time point, we elicited participants’ knowledge about causes, risk factors, and preventative interventions for 8 viral (Ebola, common cold, COVID-19, Zika) and nonviral (food allergies, amyotrophic lateral sclerosis [ALS], strep throat, stroke) illnesses. Participants were further asked how they learned about each illness and to rate how much they trust various sources of health information.
Results: We found that participants used different information sources to obtain health information about common illnesses (food allergies, strep throat, stroke) compared to emerging illnesses (Ebola, common cold, COVID-19, Zika). Participants relied mainly on news media, government agencies, and social media for information about emerging illnesses, while learning about common illnesses from family, friends, and medical professionals. Participants relied on social media for information about COVID-19, with their knowledge accuracy of COVID-19 declining over the course of the pandemic. The number of information sources participants used was positively correlated with health knowledge quality, though there was no relationship with the specific source types consulted.
Conclusions: Building on prior work on health information seeking and factors affecting health knowledge, we now find that people systematically consult different types of information sources by illness type and that the number of information sources people use affects the quality of individuals’ health knowledge. Interventions to disseminate health information may need to be targeted to where individuals are likely to seek out information, and these information sources differ systematically by illness type.
-
Mobile sensing data processed using machine learning models can passively and remotely assess mental health symptoms in the context of patients’ lives. Prior work has trained models using data from single longitudinal studies, collected from demographically homogeneous populations over short time periods using a single data collection platform or mobile application, and the generalizability of model performance across studies has not been assessed. This study presents a first analysis of whether models trained on combined longitudinal study data to predict mental health symptoms generalize across currently available public data. We combined data from the CrossCheck (individuals living with schizophrenia) and StudentLife (university students) studies. In addition to assessing generalizability, we explored whether personalizing models to align mobile sensing data, and oversampling less-represented severe symptoms, improved model performance. Leave-one-subject-out cross-validation (LOSO-CV) results are reported (a minimal LOSO-CV sketch appears after this list). Two symptoms (sleep quality and stress) had similar question-response structures across studies and were used as outcomes to explore cross-dataset prediction. Models trained with combined data were more likely to be predictive (a significant improvement over predicting the training data mean) than models trained with single-study data. Expected model performance improved as the distance between training and validation feature distributions decreased using combined versus single-study data. Personalization aligned each LOSO-CV participant with the training data, but only improved prediction of CrossCheck stress. Oversampling significantly improved severe-symptom classification sensitivity and positive predictive value, but decreased model specificity. Taken together, these results show that machine learning models trained on combined longitudinal study data may generalize across heterogeneous datasets. We encourage researchers to disseminate collected de-identified mobile sensing and mental health symptom data, and to further standardize the data types collected across studies to enable better assessment of model generalizability.
-
Many patients with mental disorders take dietary supplements, but their use patterns remain unclear. In this study, we developed a method to detect signals of associations between dietary supplement intake and mental disorders in Twitter data. We built an annotated dataset and trained a convolutional neural network classifier that identifies language patterns of dietary supplement intake with an F1-score of 0.899, a precision of 0.900, and a recall of 0.900 (a sketch of this style of text classifier appears after this list). Using the classifier, we discovered that melatonin and vitamin D were the most commonly used supplements among Twitter users who self-diagnosed mental disorders. Sentiment analysis using Linguistic Inquiry and Word Count showed that, among Twitter users who posted a mental disorder self-diagnosis, those who indicated supplement intake are more active and express more negative emotions and fewer positive emotions than those who did not mention supplement intake.
-
The act of appearing kind or helpful while conveying a feeling of superiority, i.e., the use of condescending and patronizing language, can have serious mental health implications for those who experience it. Detecting such language online can therefore be useful for online moderation systems. In this manuscript, we describe the system developed by Team UTSA for SemEval-2022 Task 4, Detecting Patronizing and Condescending Language. Our approach explores several deep learning architectures, including RoBERTa, convolutional neural networks, and bidirectional long short-term memory networks. Furthermore, we explore simple and effective methods for creating ensembles of neural network models. Overall, we experimented with several ensemble models and found that a simple combination of five RoBERTa models achieved an F-score of 0.6441 on the development dataset and 0.5745 on the final test dataset (a sketch of this ensembling strategy appears after this list). Finally, we performed a comprehensive error analysis to better understand the limitations of the model and provide ideas for further research.
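For the mobile sensing study above, here is a minimal LOSO-CV sketch using scikit-learn's LeaveOneGroupOut. The random features and the ridge regressor are stand-ins for the study's sensing features and models, not its actual pipeline.

```python
# Hedged sketch: leave-one-subject-out cross-validation (LOSO-CV).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))           # stand-in daily sensing features
y = rng.normal(size=120)                # stand-in daily symptom ratings
groups = np.repeat(np.arange(10), 12)   # 10 subjects, 12 days each

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups):
    # Train on all subjects except one, evaluate on the held-out subject.
    model = Ridge().fit(X[train_idx], y[train_idx])
    held_out = groups[test_idx][0]
    print(f"subject {held_out}: R^2 = {model.score(X[test_idx], y[test_idx]):.2f}")
```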
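For the supplement-intake study, here is a minimal Kim-style text CNN of the kind the abstract mentions: 1D convolutions of several widths over token embeddings, max-pooled and concatenated. Vocabulary size, filter widths, and embedding dimension are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: a small convolutional text classifier for tweets.
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab=5000, dim=128, n_filters=100, widths=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, n_filters, w) for w in widths])
        self.out = nn.Linear(n_filters * len(widths), 2)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, dim, seq_len)
        # Max-pool each filter's activations over time, then concatenate.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.out(torch.cat(pooled, dim=1))  # intake vs. no-intake

model = TweetCNN()
logits = model(torch.randint(0, 5000, (8, 30)))    # batch of 8 dummy tweets
```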
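And for the patronizing-language system, a sketch of the simple ensembling strategy the abstract describes: averaging the class probabilities of five fine-tuned RoBERTa models. The checkpoint paths are hypothetical; real use would point at five separately fine-tuned models.

```python
# Hedged sketch: probability-averaging ensemble of fine-tuned RoBERTa models.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoints = [f"./roberta-pcl-seed{i}" for i in range(5)]  # assumed paths
tok = AutoTokenizer.from_pretrained("roberta-base")

def ensemble_predict(text):
    inputs = tok(text, return_tensors="pt")
    probs = []
    for ckpt in checkpoints:
        model = AutoModelForSequenceClassification.from_pretrained(ckpt)
        model.eval()
        with torch.no_grad():
            probs.append(model(**inputs).logits.softmax(dim=-1))
    return torch.stack(probs).mean(dim=0)  # averaged class probabilities

print(ensemble_predict("You poor thing, let me explain this slowly."))
```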