Title: Affective Computing Model with Impulse Control in Internet of Things based on Affective Robotics
Award ID(s): 1838024, 1950485
PAR ID: 10376921
Author(s) / Creator(s):
Date Published:
Journal Name: IEEE Internet of Things Journal
ISSN: 2372-2541
Page Range / eLocation ID: 1 to 1
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Geographers have been central to identifying and exploring the shifting spatialities of border enforcement and how different enforcement strategies alter the geography of state sovereignty. Migration-related public information campaigns (PICs) are one strategy that has received increasing attention from geographers and social scientists more broadly in recent years. While existing research examines the sites and spaces where PICs are distributed, as well as the affective content of their messaging, little research has examined the development of campaigns and the transnational connections that enable their deployment. This article draws on work in the fields of carceral circuitry and transnational enforcement networks in order to expand our understanding of affective governmentality as a transnational strategy of border governance. Based on data collected as part of a large-scale comparative study of the use of PICs by the US and Australian governments, we argue that this form of affective governmentality relies upon transnational circuits through which people, money, and knowledge move to enable the development and circulation of affective messaging. In doing so, we develop the concept of transnational affective circuitry to refer to the often contingent, temporary relations and connections that enable PICs to operate as a form of transnational affective governmentality aimed at hindering unauthorized migration. Our analysis illustrates the transnational connections that enable increasingly expansive and creative forms of border enforcement to emerge while also expanding the scope of examinations of affective governmentality to attend to the relations that undergird and enable this form of transnational governance. 
  2. While contextualized word representations have improved state-of-the-art benchmarks in many NLP tasks, their potential usefulness for social-oriented tasks remains largely unexplored. We show how contextualized word embeddings can be used to capture affect dimensions in portrayals of people. We evaluate our methodology quantitatively, on held-out affect lexicons, and qualitatively, through case examples. We find that contextualized word representations do encode meaningful affect information, but they are heavily biased towards their training data, which limits their usefulness to in-domain analyses. We ultimately use our method to examine differences in portrayals of men and women. 
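The second abstract above describes capturing affect dimensions in portrayals of people from contextualized word embeddings. A minimal sketch of that general idea follows, assuming a BERT-style encoder from the HuggingFace transformers library, a tiny hand-made valence lexicon, and a ridge regression as the embedding-to-affect mapping; the model name, lexicon entries, and example sentences are illustrative placeholders, not the paper's actual setup.

```python
# Minimal sketch: score the affect (valence) of a word mention in context by
# regressing from contextualized embeddings onto a small valence lexicon.
# Model, lexicon, and sentences below are illustrative placeholders.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(word, sentence):
    """Mean-pool the final-layer vectors of the subword tokens of `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]            # (seq_len, dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # locate the word's subword span inside the sentence encoding
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0).numpy()
    raise ValueError(f"{word!r} not found in sentence")

# Tiny stand-in valence lexicon (placeholder scores in [0, 1]).
lexicon = {"joyful": 0.95, "brave": 0.85, "calm": 0.70,
           "dull": 0.30, "cruel": 0.10, "tragic": 0.05}

# Fit a linear map from embeddings to valence using a neutral carrier sentence.
X = [embed(w, f"The person was {w}.") for w in lexicon]
reg = Ridge(alpha=1.0).fit(X, list(lexicon.values()))

# Score the same entity word in two different contexts.
for sent in ["The senator was praised for her generous reforms.",
             "The senator was condemned for her ruthless tactics."]:
    score = float(reg.predict([embed("senator", sent)])[0])
    print(sent, "->", round(score, 2))
```

Held-out lexicon entries could be scored the same way to mimic the quantitative evaluation the abstract mentions; the in-domain bias the authors report would show up as degraded scores on text unlike the training corpus.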
  3. Humans routinely extract important information from images and videos, relying on their gaze. In contrast, computational systems still have difficulty annotating important visual information in a human-like manner, in part because human gaze is often not included in the modeling process. Human input is also particularly relevant for processing and interpreting affective visual information. To address this challenge, we captured human gaze, spoken language, and facial expressions simultaneously in an experiment with visual stimuli characterized by subjective and affective content. Observers described the content of complex emotional images and videos depicting positive and negative scenarios, and also their feelings about the imagery being viewed. We explore patterns across these modalities, for example by comparing the affective nature of participant-elicited linguistic tokens with image valence. Additionally, we expand a framework for generating automatic alignments between the gaze and spoken language modalities for visual annotation of images. Multimodal alignment is challenging because of the varying temporal offset between the gaze and speech streams. We explore alignment robustness when images have affective content and whether image valence influences alignment results. We also study whether word frequency-based filtering affects results: both the unfiltered and filtered conditions perform better than baseline comparisons, and filtering substantially reduces the alignment error rate. We provide visualizations of the resulting annotations from multimodal alignment. This work has implications for areas such as image understanding, media accessibility, and multimodal data fusion.
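For the gaze-and-speech alignment with word-frequency filtering described in the third abstract above, here is a minimal sketch of one plausible proximity-based approach. The data structures, the 0.5 s window, the hand-picked frequent-word set standing in for corpus-frequency filtering, and the toy data are all assumptions for illustration, not the paper's pipeline.

```python
# Minimal sketch: pair each spoken word with the temporally closest gaze
# fixation, skipping very frequent words. Data structures, window size, and
# the frequent-word set are illustrative assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class Fixation:
    t_start: float   # seconds from stimulus onset
    t_end: float
    region: str      # label of the fixated image region

@dataclass
class SpokenWord:
    t_start: float
    t_end: float
    text: str

def align(words, fixations,
          frequent_words=frozenset({"the", "a", "an", "is", "was"}),
          max_offset=0.5):
    """Return (word, region, offset) pairs within `max_offset` seconds."""
    pairs = []
    for w in words:
        if w.text.lower() in frequent_words:
            continue  # crude stand-in for word-frequency-based filtering
        w_mid = (w.t_start + w.t_end) / 2
        best = min(fixations,
                   key=lambda f: abs((f.t_start + f.t_end) / 2 - w_mid))
        offset = abs((best.t_start + best.t_end) / 2 - w_mid)
        if offset <= max_offset:
            pairs.append((w.text, best.region, round(offset, 3)))
    return pairs

# Toy example: one utterance over an image with two annotated regions.
words = [SpokenWord(0.2, 0.5, "the"), SpokenWord(0.5, 1.0, "child"),
         SpokenWord(1.1, 1.4, "looks"), SpokenWord(1.5, 2.0, "frightened")]
fixes = [Fixation(0.3, 0.9, "face"), Fixation(1.3, 2.1, "background")]
print(align(words, fixes))
# [('child', 'face', 0.15), ('looks', 'background', 0.45), ('frightened', 'background', 0.05)]
```

A frequency filter like the one sketched here drops function words that rarely refer to a fixated region, which is one way such filtering could reduce alignment error as the abstract reports.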