Title: Reconsidering the Duchenne Smile: Indicator of Positive Emotion or Artifact of Smile Intensity?
The Duchenne smile hypothesis is that smiles that include eye constriction (AU6) are the product of genuine positive emotion, whereas smiles that do not are either falsified or related to negative emotion. This hypothesis has become very influential and is often used in scientific and applied settings to justify the inference that a smile is either true or false. However, empirical support for this hypothesis has been equivocal and some researchers have proposed that, rather than being a reliable indicator of positive emotion, AU6 may just be an artifact produced by intense smiles. Initial support for this proposal has been found when comparing smiles related to genuine and feigned positive emotion; however, it has not yet been examined when comparing smiles related to genuine positive and negative emotion. The current study addressed this gap in the literature by examining spontaneous smiles from 136 participants during the elicitation of amusement, embarrassment, fear, and pain (from the BP4D+ dataset). Bayesian multilevel regression models were used to quantify the associations between AU6 and self-reported amusement while controlling for smile intensity. Models were estimated to infer amusement from AU6 and to explain the intensity of AU6 using amusement. In both cases, controlling for smile intensity substantially reduced the hypothesized association, whereas the effect of smile intensity itself was quite large and reliable. These results provide further evidence that the Duchenne smile is likely an artifact of smile intensity rather than a reliable and unique indicator of genuine positive emotion.
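As a rough sketch of the modeling approach described in the abstract (not the authors' actual code), a Bayesian multilevel regression of this kind could be specified with the bambi library in Python: self-reported amusement is regressed on AU6 while controlling for smile intensity (AU12 here), with a random intercept per participant. The synthetic data frame and column names below are placeholders for the BP4D+ measurements.

```python
import numpy as np
import pandas as pd
import bambi as bmb  # Bayesian model-building interface on top of PyMC

# Stand-in data: per-trial AU intensities and amusement ratings for 40 participants
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "participant": rng.integers(0, 40, n),
    "au6": rng.gamma(2.0, 1.0, n),    # eye-constriction (AU6) intensity, placeholder values
    "au12": rng.gamma(2.0, 1.0, n),   # smile (AU12) intensity, placeholder values
})
df["amusement"] = 0.2 * df["au12"] + rng.normal(0.0, 1.0, n)  # toy outcome

# Amusement ~ AU6 + AU12 with a participant-level random intercept
model = bmb.Model("amusement ~ au6 + au12 + (1|participant)", data=df)
idata = model.fit(draws=1000, chains=2, random_seed=0)

# Posterior mean of the AU6 effect after controlling for smile intensity
print(idata.posterior["au6"].mean().item(), idata.posterior["au12"].mean().item())
```

The question of interest is whether the posterior for the AU6 coefficient remains clearly positive once AU12 is in the model; per the abstract, controlling for smile intensity substantially reduced that association.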
Award ID(s):
1721667
NSF-PAR ID:
10170565
Journal Name:
2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII)
Page Range / eLocation ID:
594 to 599
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Smiling and laughter are typically associated with amusement. If they occur under negative emotions, systems responding naively may confuse an uncomfortable smile or laugh with an amused state. We present a passive text and video elicitation task and collect spontaneous laughter and smiles in reaction to amusing and negative experiences, using standard, ubiquitous sensors (webcam and microphone), along with participant self-ratings. While we rely on a state-of-the-art smile recognizer, for laughter recognition our transfer learning architecture enhanced on modest data outperforms other models with up to 85% accuracy (F1 = 0.86), suggesting this technique as promising for improving affect models. Subsequently, we analyze and automatically predict laughter as amused vs. negative. However, contrasting with prior findings for acted data, for this spontaneously elicited dataset classifying laughter by emotional valence is not satisfactory. 
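    As context for the transfer-learning approach mentioned above, here is a minimal sketch (not the authors' architecture): an ImageNet-pretrained ResNet-18 backbone is frozen and only a new two-class head is trained, assuming laughter clips have been rendered as spectrogram images. The batch below is a random stand-in for real data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone; freeze it so only the new head learns from the modest dataset
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., amused vs. negative laughter

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-in batch of 8 spectrogram "images" and labels (replace with a real DataLoader)
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
for _ in range(3):  # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print(f"training loss: {loss.item():.3f}")
```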
  2. Obeid, I. (Ed.)
    The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled “MRI: High Performance Digital Pathology Using Big Data and Machine Learning” [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we will present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high quality annotations of breast tissue.

    It is well known that state-of-the-art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4], and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not have access to such data resources must rely on techniques in which existing models can be adapted to new datasets [6].

    A preliminary version of this breast corpus release was tested in a pilot study using a baseline machine learning system, ResNet18 [7], that leverages several open-source Python tools. The pilot corpus was divided into three sets: train, development, and evaluation. Portions of these slides were manually annotated [1] using the nine labels in Table 1 [8] to identify five to ten examples of pathological features on each slide. Not every pathological feature is annotated, meaning excluded areas can contain foci relevant to these labels that are not used for training. A summary of the number of patches within each label is given in Table 2. To maintain a balanced training set, 1,000 patches of each label were used to train the machine learning model. Throughout all sets, only annotated patches were involved in model development. The performance of this model in identifying all the patches in the evaluation set can be seen in the confusion matrix of classification accuracy in Table 3. The highest-performing labels were background (97% correct identification) and artifact (76% correct identification). A correlation exists between labels with more than 6,000 development patches and accurate performance on the evaluation set. Additionally, these results indicated a need to further refine the annotation of invasive ductal carcinoma (“indc”), inflammation (“infl”), nonneoplastic features (“nneo”), normal (“norm”), and suspicious (“susp”). This pilot experiment motivated changes to the corpus that will be discussed in detail in this poster presentation.

    To increase the accuracy of the machine learning model, we modified how we addressed underperforming labels. One common source of error arose from how non-background labels were converted into patches: large areas of background within other labels were isolated within a patch, resulting in connective tissue misrepresenting a non-background label. In response, the annotation overlay margins were revised to exclude benign connective tissue in non-background labels.
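    A minimal sketch of the pilot setup just described (a nine-label ResNet18 patch classifier trained on a label-balanced subset of 1,000 patches per label) might look as follows. The label abbreviations, balancing helper, and data handling are illustrative assumptions, not the project's released code.

```python
import random

import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Nine labels from Table 1 (abbreviations approximated from the text)
LABELS = ["bckg", "artf", "null", "norm", "infl", "nneo", "susp", "indc", "dcis"]

def balance_patches(patches_by_label, per_label=1000):
    """Keep at most `per_label` randomly chosen patches per label to balance the training set."""
    balanced = []
    for label, patches in patches_by_label.items():
        balanced.extend(random.sample(patches, min(per_label, len(patches))))
    random.shuffle(balanced)
    return balanced

# Baseline classifier: ResNet18 with its ImageNet head replaced by the nine pathology labels
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(LABELS))
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# ... build DataLoaders from balance_patches(...) over the train/dev/eval splits and train as usual.
```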
Corresponding patient reports and supporting immunohistochemical stains further guided annotation reviews. The microscopic diagnoses given by the primary pathologist in these reports detail the pathological findings within each tissue site, but not within each specific slide. The microscopic diagnoses informed revisions specifically targeting annotated regions classified as cancerous, ensuring that the labels “indc” and “dcis” were used only where a pathologist had diagnosed the tissue as such. Further differentiation of cancerous and precancerous labels, as well as the location of their focus on a slide, could be accomplished with supplemental immunohistochemically stained (IHC) slides. When distinguishing whether a focus is a nonneoplastic feature versus a cancerous growth, pathologists apply antigen-targeting stains to the tissue in question to confirm the diagnosis. For example, a nonneoplastic feature of usual ductal hyperplasia will display diffuse staining for cytokeratin 5 (CK5) and no diffuse staining for estrogen receptor (ER), while a cancerous growth of ductal carcinoma in situ will have negative or focally positive staining for CK5 and diffuse staining for ER [9]. Many tissue samples contain cancerous and non-cancerous features with morphological overlaps that cause variability between annotators. The informative features that IHC slides provide could play an integral role in machine-model pathology diagnostics.

Following the revisions made to all the annotations, a second experiment was run using ResNet18. Compared to the pilot study, an increase in model prediction accuracy was seen for the labels indc, infl, nneo, norm, and null. This increase is correlated with an increase in annotated area and annotation accuracy. Model performance in identifying the suspicious label decreased by 25% due to the 57% decrease in the total annotated area described by this label. A summary of the model performance is given in Table 4, which shows the new prediction accuracy and the absolute change in error rate compared to Table 3.

The breast tissue subset we are developing includes 3,505 annotated breast pathology slides from 296 patients. The average size of a scanned SVS file is 363 MB. The annotations are stored in an XML format, and a CSV version of the annotation file is also available, which provides a flat, or simple, annotation that is easy for machine learning researchers to access and interface to their systems. Each patient is identified by an anonymized medical reference number. Within each patient’s directory, one or more sessions are identified, also anonymized to the first of the month in which the sample was taken. These sessions are broken into groupings of tissue taken on that date (in this case, breast tissue). A deidentified patient report stored as a flat text file is also available. Within these slides there are a total of 16,971 annotated regions, with an average of 4.84 annotations per slide. Among those annotations, 8,035 are non-cancerous (normal, background, null, and artifact), 6,222 are carcinogenic signs (inflammation, nonneoplastic, and suspicious), and 2,714 are cancerous labels (ductal carcinoma in situ and invasive ductal carcinoma). The individual patients are split into three sets: train, development, and evaluation. Of the 74 cancerous patients, 20 each were allotted to the development and evaluation sets, while the remaining 34 were allotted to the training set.
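    The flat CSV annotations and per-patient directory layout described above could be consumed along the following lines; the column names and path are assumptions for illustration, not the released schema.

```python
import csv
from collections import Counter, defaultdict

def load_annotations(csv_path):
    """Group annotated regions by slide and tally how many regions carry each label."""
    regions_by_slide = defaultdict(list)
    label_counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            regions_by_slide[row["slide_id"]].append(row)  # hypothetical column names
            label_counts[row["label"]] += 1
    return regions_by_slide, label_counts

# Hypothetical usage:
# regions, counts = load_annotations("tudp/v1.0.0/breast/annotations.csv")
# print(counts.most_common())  # regions per label across the 3,505 annotated slides
```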
The remaining 222 patients were split up to preserve the overall distribution of labels within the corpus. This was done in the hope of creating control sets for comparable studies. Overall, the development and evaluation sets each have 80 patients, while the training set has 136 patients.

In a related component of this project, slides from the Fox Chase Cancer Center (FCCC) Biosample Repository (https://www.foxchase.org/research/facilities/genetic-research-facilities/biosample-repository-facility) are being digitized in addition to slides provided by Temple University Hospital. This data includes 18 different types of tissue, with approximately 38.5% urinary tissue and 16.5% gynecological tissue. These slides and the metadata provided with them are already anonymized and include diagnoses in a spreadsheet with sample and patient ID. We plan to release over 13,000 unannotated slides from the FCCC Corpus simultaneously with v1.0.0 of TUDP. Details of this release will also be discussed in this poster.

Few digitally annotated databases of pathology samples like TUDP exist due to the extensive data collection and processing required. The breast corpus subset should be released by November 2021, and by December 2021 we should also release the unannotated FCCC data. We are currently annotating urinary tract data as well and expect to release about 5,600 processed TUH slides in this subset, with an additional 53,000 unprocessed TUH slides already digitized. Corpora of this size will stimulate the development of a new generation of deep learning technology. In clinical settings where resources are limited, an assistive diagnosis model could help manage pathologists’ workload and even help prioritize suspected cancerous cases.

ACKNOWLEDGMENTS

This material is supported by the National Science Foundation under grant nos. CNS-1726188 and 1925494. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

REFERENCES

[1] N. Shawki et al., “The Temple University Digital Pathology Corpus,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York City, New York, USA: Springer, 2020, pp. 67–104. https://www.springer.com/gp/book/9783030368432.

[2] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning,” Major Research Instrumentation (MRI), Division of Computer and Network Systems, Award No. 1726188, January 1, 2018 – December 31, 2021. https://www.isip.piconepress.com/projects/nsf_dpath/.

[3] A. Gulati et al., “Conformer: Convolution-augmented Transformer for Speech Recognition,” in Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2020, pp. 5036–5040. https://doi.org/10.21437/interspeech.2020-3015.

[4] C.-J. Wu et al., “Machine Learning at Facebook: Understanding Inference at the Edge,” in Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA), 2019, pp. 331–344. https://ieeexplore.ieee.org/document/8675201.

[5] I. Caswell and B. Liang, “Recent Advances in Google Translate,” Google AI Blog: The latest from Google Research, 2020. [Online]. Available: https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html. [Accessed: 01-Aug-2021].
[6] V. Khalkhali, N. Shawki, V. Shah, M. Golmohammadi, I. Obeid, and J. Picone, “Low Latency Real-Time Seizure Detection Using Transfer Deep Learning,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2021, pp. 1–7. https://www.isip.piconepress.com/publications/conference_proceedings/2021/ieee_spmb/eeg_transfer_learning/.

[7] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning,” Philadelphia, Pennsylvania, USA, 2020. https://www.isip.piconepress.com/publications/reports/2020/nsf/mri_dpath/.

[8] I. Hunt, S. Husain, J. Simons, I. Obeid, and J. Picone, “Recent Advances in the Temple University Digital Pathology Corpus,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2019, pp. 1–4. https://ieeexplore.ieee.org/document/9037859.

[9] A. P. Martinez, C. Cohen, K. Z. Hanley, and X. (Bill) Li, “Estrogen Receptor and Cytokeratin 5 Are Reliable Markers to Separate Usual Ductal Hyperplasia From Atypical Ductal Hyperplasia and Low-Grade Ductal Carcinoma In Situ,” Arch. Pathol. Lab. Med., vol. 140, no. 7, pp. 686–689, Apr. 2016. https://doi.org/10.5858/arpa.2015-0238-OA.
  3. Research has rarely examined how the COVID-19 pandemic may affect teens’ social media engagement and psychological wellbeing, and even less research has compared teens with and without mental health concerns. We collected and analyzed weekly data from January to December 2020 from teens in four Reddit communities (subreddits), including teens in r/Teenagers and teens who participated in three mental health subreddits (r/Depression, r/Anxiety, and r/SuicideWatch). The results showed that teens’ weekly subreddit participation, posting/commenting frequency, and emotion expression were related to significant pandemic events. Teen Redditors on r/Teenagers had a higher posting/commenting frequency but lower negative emotion than teen Redditors on the three mental health subreddits. When comparing posts/comments on r/Teenagers, teens who had ever visited one of the three mental health subreddits posted/commented twice as frequently as teens who had not, but their emotion expression was similar. The results from the Interrupted Time Series Analysis (ITSA) indicated that teens both with and without mental health concerns reversed the trend in posting frequency and negative emotion from declining to increasing right after the pandemic outbreak, and that teens with mental health concerns had a more rapidly increasing trend in posting/commenting. The findings suggest that teens’ social media engagement and emotion expression reflect the evolution of the pandemic. Teens with mental health concerns are more likely to reveal their emotions on specialized mental health subreddits than on the general r/Teenagers subreddit. In addition, the findings indicate that teens with mental health concerns had a strong desire for social interaction that various real-world barriers may inhibit. The findings call for more attention to understanding the pandemic’s influence on teens by monitoring and analyzing social media data and offering adequate support to teens regarding their mental wellbeing.
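    As a sketch of the analysis style described above (not the study's code), an interrupted time series can be fit as a segmented regression with a level-change term for the post-outbreak period and a slope-change term for weeks elapsed since the outbreak. The weekly counts and outbreak week below are stand-ins.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

weeks = pd.DataFrame({"week": range(1, 53)})          # weeks of 2020
outbreak_week = 11                                     # assumed interruption point (mid-March)
weeks["post"] = (weeks["week"] >= outbreak_week).astype(int)
weeks["weeks_since"] = (weeks["week"] - outbreak_week).clip(lower=0)

# Stand-in weekly posting counts: declining before the outbreak, jumping and rising after it
rng = np.random.default_rng(0)
weeks["posts"] = (100 - weeks["week"] + 30 * weeks["post"]
                  + 3 * weeks["weeks_since"] + rng.normal(0, 5, len(weeks)))

# `post` estimates the immediate level change; `weeks_since` estimates the change in trend
itsa = smf.ols("posts ~ week + post + weeks_since", data=weeks).fit()
print(itsa.params)
```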
  4. Abstract

    Quasi-vertical profiles (QVPs) of polarimetric radar data have emerged as a powerful tool for studying precipitation microphysics. Various studies have found enhancements in specific differential phase KDP in regions of suspected secondary ice production (SIP) due to rime splintering. Similar KDP enhancements have also been found in regions of sublimating snow, another proposed SIP process. This work explores these KDP signatures for two cases of sublimating snow using nearly collocated S- and Ka-band radars. The presence of the signature was inconsistent between the radars, prompting exploration of alternative causes. Idealized simulations are performed using a radar beam-broadening model to explore the impact of nonuniform beam filling (NBF) on the observed reflectivity Z and KDP within the sublimation layer. Rather than an intrinsic increase in ice concentration, the observed KDP enhancements can instead be explained by NBF in the presence of sharp vertical gradients of Z and KDP within the sublimation zone, which results in a KDP bias dipole. The severity of the bias is sensitive to the Z gradient and radar beamwidth and elevation angle, which explains its appearance at only one radar. In addition, differences in scanning strategies and range thresholds during QVP processing can constructively enhance these positive KDP biases by excluding the negative portion of the dipole. These results highlight the need to consider NBF effects in regions not traditionally considered (e.g., in pure snow) due to the increased KDP fidelity afforded by QVPs and the subsequent ramifications this has on the observability of sublimational SIP.

    Significance Statement

    Many different processes can cause snowflakes to break apart into numerous tiny pieces, including when they evaporate into dry air. Purported evidence of this phenomenon has been seen in data from some weather radars, but we noticed it was not seen in data from others. In this work we use case studies and models to show that this signature may actually be an artifact from the radar beam becoming too big and there being too much variability of the precipitation within it. While this breakup process may actually be occurring in reality, these results suggest we may have trouble observing it with typical weather radars.
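    As a toy illustration of the NBF idea (not the study's beam-broadening model), the sketch below averages a sharp vertical reflectivity gradient through a Gaussian beam in linear power units and shows how a broad beam smears and biases the observed profile near the gradient. The profile values and beam width are assumed.

```python
import numpy as np

# "True" reflectivity profile on a fine height grid: a sharp drop below a 1 km sublimation layer
z = np.linspace(0.0, 3.0, 601)                 # height (km)
true_dbz = np.where(z > 1.0, 25.0, -5.0)       # dBZ, values assumed for illustration

def beam_weighted_dbz(center_km, sigma_km=0.2):
    """Reflectivity seen by a Gaussian beam centered at center_km (averaged in linear units)."""
    w = np.exp(-0.5 * ((z - center_km) / sigma_km) ** 2)
    linear = 10.0 ** (true_dbz / 10.0)
    return 10.0 * np.log10(np.sum(w * linear) / np.sum(w))

for h in (0.6, 0.8, 1.0, 1.2, 1.4):
    obs = beam_weighted_dbz(h)
    intrinsic = float(np.interp(h, z, true_dbz))
    print(f"beam centered at {h:.1f} km: observed {obs:5.1f} dBZ vs. intrinsic {intrinsic:5.1f} dBZ")
```

Because the average is taken in linear power, gates centered below the layer whose beams clip the high-reflectivity region report values far above the intrinsic -5 dBZ, a mechanism analogous to the one that biases beam-averaged polarimetric quantities such as KDP near sharp gradients.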
  5. Abstract

    When forecasts for a major weather event begin days in advance, updates may be more accurate but inconsistent with the original forecast. Evidence suggests that the resulting inconsistency may reduce user trust. However, adding an uncertainty estimate to the forecast may attenuate any loss of trust due to forecast inconsistency, as has been shown with forecast inaccuracy. To evaluate this hypothesis, this experiment tested the impact on trust of adding probabilistic snow-accumulation forecasts to single-value forecasts in a series of original and revised forecast pairs (based on historical records) that varied in both consistency and accuracy. Participants rated their trust in the forecasts and used them to make school-closure decisions. One-half of the participants received single-value forecasts, and one-half also received the probability of 6 in. or more of snow (the decision threshold in the assigned task). As in previous research, forecast inaccuracy was detrimental to trust, although probabilistic forecasts attenuated the effect. Moreover, the inclusion of probabilistic forecasts allowed participants to make economically better decisions. Surprisingly, in this study inconsistency increased rather than decreased trust, perhaps because it alerted participants to uncertainty and led them to make more cautious decisions. Furthermore, the positive effect of inconsistency on trust was enhanced by the inclusion of probabilistic forecasts. This work has important implications for practical settings, suggesting that both probabilistic forecasts and forecast inconsistency provide useful information to decision-makers. Therefore, members of the public may benefit from well-calibrated uncertainty estimates and from newer, more reliable information.
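    As a toy illustration of how a probabilistic forecast can support economically better school-closure decisions, the sketch below compares the expected cost of closing versus staying open at several probabilities of 6 in. or more; the costs are hypothetical and are not the study's payoff scheme.

```python
def expected_costs(p_heavy_snow, close_cost=1000.0, caught_open_cost=5000.0):
    """Expected cost of each action given P(snowfall >= 6 in.)."""
    cost_if_close = close_cost                      # closing costs the same regardless of weather
    cost_if_open = p_heavy_snow * caught_open_cost  # staying open is costly only if heavy snow occurs
    return cost_if_close, cost_if_open

for p in (0.1, 0.2, 0.3, 0.5):
    close, stay_open = expected_costs(p)
    decision = "close" if close < stay_open else "stay open"
    print(f"P(>=6 in.) = {p:.0%}: close = {close:.0f}, stay open = {stay_open:.0f} -> {decision}")
```

With these stand-in costs the break-even probability is 20%, so a participant who sees the explicit probability can close exactly when doing so is cheaper in expectation, which is one way a probabilistic forecast can lead to economically better decisions.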

    Significance Statement

    The purpose of this study was to clarify how explicit uncertainty information and forecast inconsistency impact trust and decision-making in the context of sequential forecasts from the same source. This is important because trust is critical for effective risk communication. In the absence of trust, people may not use available information and subsequently may put themselves and others at greater-than-necessary risk. Our results suggest that updating forecasts when newer, more reliable information is available and providing reliable uncertainty estimates can support user trust and decision-making.

     