Title: Capturing Laughter and Smiles under Genuine Amusement vs. Negative Emotion
Smiling and laughter are typically associated with amusement, but both can also occur under negative emotion, so a system that responds naively may mistake an uncomfortable smile or laugh for an amused state. We present a passive text- and video-elicitation task and collect spontaneous laughter and smiles in reaction to amusing and negative experiences, using standard, ubiquitous sensors (webcam and microphone), along with participant self-ratings. While we rely on a state-of-the-art smile recognizer, for laughter recognition our transfer learning architecture, fine-tuned on modest data, outperforms other models with up to 85% accuracy (F1 = 0.86), suggesting that this technique is promising for improving affect models. We then analyze and automatically predict laughter as amused vs. negative. However, in contrast with prior findings on acted data, classifying laughter by emotional valence on this spontaneously elicited dataset is not satisfactory.
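The abstract does not specify the transfer-learning architecture, so the following is only a minimal sketch of the general pattern it describes: take a network pretrained on a large corpus, freeze its feature extractor, and fine-tune a small classification head on the modest laughter dataset. Using torchvision's resnet18 over log-mel spectrogram patches is an assumption for illustration, not the authors' model.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Hypothetical backbone: an ImageNet-pretrained CNN applied to log-mel
    # spectrogram patches; any pretrained image or audio network could stand in.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in backbone.parameters():
        p.requires_grad = False                      # freeze pretrained features
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # laughter vs. non-laughter

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(spectrograms, labels):
        # spectrograms: (batch, 1, 224, 224) log-mel patches; labels: (batch,)
        x = spectrograms.repeat(1, 3, 1, 1)          # 1-channel -> 3-channel input
        logits = backbone(x)
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Only the final linear layer is updated, which is what makes training feasible on a modest corpus.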
Award ID(s): 1851591
NSF-PAR ID: 10184609
Journal Name: 2020 Workshop on Human-Centered Computational Sensing - IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Page Range / eLocation ID: 1 to 6
Sponsoring Org: National Science Foundation
More Like this
  1. The Duchenne smile hypothesis is that smiles that include eye constriction (AU6) are the product of genuine positive emotion, whereas smiles that do not are either falsified or related to negative emotion. This hypothesis has become very influential and is often used in scientific and applied settings to justify the inference that a smile is either true or false. However, empirical support for this hypothesis has been equivocal, and some researchers have proposed that, rather than being a reliable indicator of positive emotion, AU6 may just be an artifact produced by intense smiles. Initial support for this proposal has been found when comparing smiles related to genuine and feigned positive emotion; however, it has not yet been examined when comparing smiles related to genuine positive and negative emotion. The current study addressed this gap in the literature by examining spontaneous smiles from 136 participants during the elicitation of amusement, embarrassment, fear, and pain (from the BP4D+ dataset). Bayesian multilevel regression models were used to quantify the associations between AU6 and self-reported amusement while controlling for smile intensity. Models were estimated to infer amusement from AU6 and to explain the intensity of AU6 using amusement. In both cases, controlling for smile intensity substantially reduced the hypothesized association, whereas the effect of smile intensity itself was quite large and reliable. These results provide further evidence that the Duchenne smile is likely an artifact of smile intensity rather than a reliable and unique indicator of genuine positive emotion.
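    The analysis can be made concrete with a small sketch. The study fit Bayesian multilevel regressions; the version below is a frequentist analogue using statsmodels, with hypothetical column names, asking whether AU6 still tracks self-reported amusement once smile intensity is controlled for.

        import pandas as pd
        import statsmodels.formula.api as smf

        # One row per smile event; file and column names are hypothetical.
        df = pd.read_csv("smiles.csv")

        # Random intercept per participant; fixed effects for amusement and
        # smile intensity. A near-zero amusement coefficient alongside a large
        # smile_intensity coefficient would echo the paper's finding.
        model = smf.mixedlm("au6_intensity ~ amusement + smile_intensity",
                            df, groups=df["participant"])
        print(model.fit().summary())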
  2. Abstract

    Motivation

    The crux of molecular property prediction is to generate meaningful representations of the molecules. One promising route is to exploit the molecular graph structure through graph neural networks (GNNs). Both atoms and bonds significantly affect the chemical properties of a molecule, so an expressive model ought to exploit both node (atom) and edge (bond) information simultaneously. Inspired by this observation, we explore multi-view modeling with GNNs (MVGNN) to form a novel parallel framework that treats atoms and bonds as equally important when learning molecular representations. Specifically, one view is atom-central and the other is bond-central; the two views then exchange information through specifically designed components to enable more accurate predictions. To further enhance the expressive power of MVGNN, we propose a cross-dependent message-passing scheme to improve communication between the views. The overall framework is termed CD-MVGNN.
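    As a rough illustration of the two-view idea (not the authors' implementation; all dimensions and wiring below are assumptions), one cross-dependent layer might let atom states aggregate incoming bond states, while bond states are refreshed from their own state and both endpoint atoms:

        import torch
        import torch.nn as nn

        class CrossDependentLayer(nn.Module):
            def __init__(self, dim):
                super().__init__()
                self.node_update = nn.Linear(2 * dim, dim)
                self.edge_update = nn.Linear(3 * dim, dim)

            def forward(self, x, e, edge_index):
                src, dst = edge_index                  # (2, num_edges)
                # Atom view: sum incoming bond states into each atom.
                agg = torch.zeros_like(x)
                agg.index_add_(0, dst, e)
                x_new = torch.relu(self.node_update(torch.cat([x, agg], dim=-1)))
                # Bond view: refresh each bond from itself and its endpoints.
                e_new = torch.relu(self.edge_update(
                    torch.cat([e, x_new[src], x_new[dst]], dim=-1)))
                return x_new, e_new

        # Toy molecule: 3 atoms, 2 directed bonds, 8-dim features.
        x = torch.randn(3, 8)
        e = torch.randn(2, 8)
        edge_index = torch.tensor([[0, 1], [1, 2]])
        layer = CrossDependentLayer(8)
        x, e = layer(x, e, edge_index)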

    Results

    We theoretically justify the expressiveness of the proposed model in terms of distinguishing non-isomorphic graphs. Extensive experiments demonstrate that CD-MVGNN achieves remarkably superior performance over state-of-the-art models on various challenging benchmarks. Meanwhile, visualization of node importance is consistent with prior knowledge, confirming the interpretability of CD-MVGNN.

    Availability and implementation

    The code and data underlying this work are available on GitHub at https://github.com/uta-smile/CD-MVGNN.

    Supplementary information

    Supplementary data are available at Bioinformatics online.

     
  3. Abstract. Future changes in the El Niño–Southern Oscillation (ENSO) are uncertain, both because future projections differ between climate models and because the large internal variability of ENSO clouds the diagnosis of forced changes in observations and individual climate model simulations. By leveraging 14 single model initial-condition large ensembles (SMILEs), we robustly isolate the time-evolving response of ENSO sea surface temperature (SST) variability to anthropogenic forcing from internal variability in each SMILE. We find nonlinear changes in time in many models and considerable inter-model differences in projected changes in ENSO and the mean-state tropical Pacific zonal SST gradient. We demonstrate a linear relationship between the change in ENSO SST variability and the tropical Pacific zonal SST gradient, although forced changes in the tropical Pacific SST gradient often occur later in the 21st century than changes in ENSO SST variability, which can lead to departures from the linear relationship. Single-forcing SMILEs show a potential contribution of anthropogenic forcing (aerosols and greenhouse gases) to historical changes in ENSO SST variability, while the observed historical strengthening of the tropical Pacific SST gradient sits on the edge of the model spread for those models for which single-forcing SMILEs are available. Our results highlight the value of SMILEs for investigating time-dependent forced responses and inter-model differences in ENSO projections. The nonlinear changes in ENSO SST variability found in many models demonstrate the importance of characterizing this time-dependent behavior, as it implies that ENSO impacts may vary dramatically throughout the 21st century. 
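    The decomposition that underlies such SMILE analyses is simple to state: the forced response is estimated as the ensemble mean, and internal variability as each member's departure from it. The sketch below, with synthetic data and a hypothetical array layout, shows the idea for a sliding-window measure of ENSO SST variability.

        import numpy as np

        # nino34: (n_members, n_years) Nino3.4 SST anomaly index for one model.
        # Synthetic stand-in data with an imposed warming trend.
        rng = np.random.default_rng(0)
        nino34 = rng.standard_normal((40, 150)) + np.linspace(0.0, 0.5, 150)

        forced = nino34.mean(axis=0)       # forced response: ensemble mean
        internal = nino34 - forced         # internal variability per member

        # ENSO SST variability: sliding 30-yr standard deviation of the
        # internal component, then ensemble-averaged to estimate its forced
        # evolution in time.
        window = 30
        var = np.stack([internal[:, i:i + window].std(axis=1)
                        for i in range(nino34.shape[1] - window + 1)], axis=1)
        forced_variability_change = var.mean(axis=0)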
  4. Wang, L.; Dou, Q.; Fletcher, P.T.; Speidel, S.; Li, S. (Eds.)
    Model calibration measures the agreement between predicted probability estimates and the true correctness likelihood, and proper calibration is vital for high-risk applications. Unfortunately, modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability. Medical image segmentation suffers particularly from this due to the natural uncertainty of tissue boundaries, which is exacerbated by loss functions that favor overconfidence in the majority classes. We address these challenges with DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels. Our experiments demonstrate that DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation. Our method consistently achieves better calibration, higher accuracy, and faster inference than these methods, especially on rarer classes. This performance is attributed to our domain-aware regularization, which informs semantic model calibration. These findings show the importance of semantic ties between class labels in building confidence in deep learning models. The framework has the potential to improve the trustworthiness and reliability of generic medical image segmentation models. The code for this article is available at https://github.com/lab-smile/DOMINO.
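    The notion of calibration in the first sentence can be made concrete with the expected calibration error (ECE), a standard metric (a generic sketch, not part of DOMINO): bin predictions by confidence and compare each bin's accuracy to its mean confidence.

        import numpy as np

        def expected_calibration_error(probs, labels, n_bins=10):
            # probs: (n, n_classes) softmax outputs; labels: (n,) int classes.
            conf = probs.max(axis=1)
            pred = probs.argmax(axis=1)
            correct = (pred == labels).astype(float)
            edges = np.linspace(0.0, 1.0, n_bins + 1)
            ece = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (conf > lo) & (conf <= hi)
                if mask.any():
                    # gap between confidence and actual accuracy in this bin,
                    # weighted by the fraction of samples falling in it
                    ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
            return ece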
  5. We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation. As the name suggests, it relies only on simple linear interpolation of deep convolutional features from pre-trained convnets. We show that despite its simplicity, DFI performs high-level semantic transformations such as "make older/younger", "make bespectacled", and "add smile" surprisingly well, sometimes even matching or outperforming the state of the art. This is particularly unexpected because DFI requires no specialized network architecture, nor any deep network to be trained for these tasks. DFI can therefore serve as a new baseline against which to evaluate more complex algorithms, and it provides a practical answer to the question of which image transformation tasks remain challenging with the rise of deep learning.
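    The core interpolation step, as the abstract describes it, reduces to a few lines; feature extraction and the inversion back to pixels are assumed rather than shown, and the function names are ours:

        import numpy as np

        def dfi_direction(feats_pos, feats_neg):
            # feats_*: (n_images, d) deep convnet features for images with
            # and without the target attribute (e.g. smiling / not smiling).
            return feats_pos.mean(axis=0) - feats_neg.mean(axis=0)

        def transform(feat, direction, alpha=1.0):
            # Linearly shift one image's features along the attribute
            # direction, e.g. pushing a face toward "smiling".
            return feat + alpha * direction

        # The shifted features are then inverted back to pixels, e.g. by
        # optimizing an image whose features match transform(feat, w).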