Title: Using Augmented Reality to Better Study Human-Robot Interaction
In the field of Human-Robot Interaction, researchers often use techniques such as the Wizard-of-Oz paradigm to better study narrow scientific questions while carefully controlling robot capabilities unrelated to those questions, especially when those other capabilities are not yet easy to automate. However, those techniques often impose limitations on the types of collaborative tasks that can be used and on the perceived realism of those tasks and the task context. In this paper, we discuss how Augmented Reality can be used to address these concerns while increasing researchers' level of experimental control, and we examine both the advantages and disadvantages of this approach.
Award ID(s):
1909864
PAR ID:
10155297
Date Published:
Journal Name:
HCII Conference on Virtual, Augmented, and Mixed Reality
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
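
The abstract above includes no code, but the core mechanism it describes, a Wizard-of-Oz setup in which an experimenter covertly supplies a capability the robot lacks while Augmented Reality renders controllable task elements, can be sketched as a short control loop. This is a minimal illustration under our own assumptions; every name below (WizardConsole, ARTaskObject, and so on) is hypothetical and does not come from the paper.

```python
# Minimal sketch of a Wizard-of-Oz control loop in which an experimenter
# ("wizard") stands in for a capability the robot does not actually have
# (here, speech understanding), while task objects are rendered in AR so
# the experimenter can reset the scene identically between trials.
# All names are illustrative assumptions; none come from the paper.

from dataclasses import dataclass

@dataclass
class ARTaskObject:
    """A virtual task element rendered in the participant's AR headset."""
    name: str
    position: tuple  # (x, y, z) in the shared task frame

class WizardConsole:
    """Experimenter-facing interface that fakes the robot's 'understanding'."""
    def interpret_utterance(self, utterance: str) -> str:
        # In a real study the wizard selects a canned response;
        # here we simulate that choice with a lookup table.
        canned = {"hand me the block": "pick(block)",
                  "stop": "halt()"}
        return canned.get(utterance.lower(), "ask_clarification()")

def run_trial(console: WizardConsole, scene: list, utterance: str):
    command = console.interpret_utterance(utterance)  # wizard, not an NLU model
    print(f"Robot executes: {command}")
    # AR lets the experimenter restore an identical task context every trial.
    for obj in scene:
        print(f"AR overlay: {obj.name} at {obj.position}")

run_trial(WizardConsole(),
          [ARTaskObject("block", (0.3, 0.0, 0.1))],
          "Hand me the block")
```

The point of the sketch is the division of labor: the wizard replaces the hard-to-automate capability, while AR gives the experimenter a controllable, perfectly repeatable task context.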
More Like this
  1. For more than four decades, researchers have used meta-analyses to synthesize data from multiple experimental studies, often to draw conclusions that individual studies cannot support on their own. More recently, single-case experimental design (SCED) researchers have adopted meta-analysis techniques to answer research questions with data gleaned from SCED experiments. Meta-analyses enable researchers to answer questions regarding intervention efficacy, generality, and condition boundaries. Here we discuss meta-analysis techniques, the rationale for their adaptation to SCED studies, and current indices used to quantify the effect of SCED data in applied behavior analysis.
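
The abstract does not name the specific effect-size indices it covers, so as a hedged illustration here is one widely used nonoverlap index for SCED data, Nonoverlap of All Pairs (NAP), sketched in Python; the function name and data layout are our own.

```python
# Minimal sketch: Nonoverlap of All Pairs (NAP), a nonoverlap effect-size
# index commonly used to quantify SCED intervention effects. Chosen here as
# a representative example, not as an index the abstract specifies.
# NAP = (wins + 0.5 * ties) / (n_baseline * n_intervention), comparing every
# baseline observation with every intervention observation.

from itertools import product

def nap(baseline, intervention, improvement="increase"):
    pairs = list(product(baseline, intervention))
    if improvement == "decrease":        # e.g., problem behavior should drop
        pairs = [(t, b) for (b, t) in pairs]
    wins = sum(1 for b, t in pairs if t > b)
    ties = sum(1 for b, t in pairs if t == b)
    return (wins + 0.5 * ties) / len(pairs)

# Example: responding rises after the intervention is introduced.
print(nap(baseline=[2, 3, 3, 4], intervention=[5, 6, 6, 7]))  # -> 1.0
```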
  2. Social media discourse involves people from different backgrounds, beliefs, and motives. Thus, such discourse can often devolve into toxic interactions. Generative models such as Llama and ChatGPT have recently exploded in popularity due to their capabilities in zero-shot question answering. Because these models are increasingly being used to ask questions of social significance, a crucial research question is whether they can understand social media dynamics. This work provides a critical analysis of generative LLMs' ability to understand language and dynamics in social contexts, particularly considering cyberbullying and anti-cyberbullying (posts aimed at reducing cyberbullying) interactions. Specifically, we compare and contrast the capabilities of different large language models (LLMs) to understand three key aspects of social dynamics: language, directionality, and the occurrence of bullying/anti-bullying messages. We found that while fine-tuned LLMs exhibit promising results in some social media understanding tasks (understanding directionality), they present mixed results in others (proper paraphrasing and bullying/anti-bullying detection). We also found that fine-tuning and prompt engineering can have positive effects in some tasks. We believe that an understanding of LLMs' capabilities is crucial for designing future models that can be effectively used in social applications.
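
As a minimal sketch of the zero-shot setup described above, the snippet below asks a generative model to label a post as bullying, anti-bullying, or neither. The prompt template, label set, and query_llm stub are illustrative assumptions; the paper's actual prompts, models, and fine-tuning details are not reproduced here.

```python
# Minimal sketch of zero-shot bullying / anti-bullying labeling with a
# generative LLM. The prompt wording and the query_llm() stub are
# illustrative assumptions, not the study's actual setup.

LABELS = ["bullying", "anti-bullying", "neither"]

def build_prompt(post: str) -> str:
    return (
        "Classify the following social media post as exactly one of "
        f"{', '.join(LABELS)}. Anti-bullying means the post pushes back "
        "against harassment.\n\n"
        f"Post: {post}\nLabel:"
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to Llama, ChatGPT, or another generative model."""
    raise NotImplementedError("wire this to your LLM client of choice")

def classify(post: str) -> str:
    answer = query_llm(build_prompt(post)).strip().lower()
    # Fall back to 'neither' if the model answers off-template.
    return next((label for label in LABELS if label in answer), "neither")
```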
  3. For several years, the software engineering research community has used eye trackers to study program comprehension, bug localization, pair programming, and other software engineering tasks. Eye trackers provide researchers with insights into software engineers' cognitive processes, data that can augment those acquired through other means, such as online surveys and questionnaires. While there are many ways to take advantage of eye trackers, advancing their use requires defining standards for experimental design, execution, and reporting. We begin by presenting the foundations of eye tracking to provide context and perspective. Based on previous surveys of eye tracking for programming and software engineering tasks and our collective, extensive experience with eye trackers, we discuss when and why researchers should use eye trackers, as well as how they should use them. We compile a list of typical use cases (real and anticipated) of eye trackers, as well as metrics, visualizations, and statistical analyses to analyze and report eye-tracking data. We also discuss the pragmatics of eye tracking studies. Finally, we offer lessons learned about using eye trackers to study software engineering tasks. This paper is intended to be a one-stop resource for researchers interested in designing, executing, and reporting eye tracking studies of software engineering tasks.
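
To make the metrics concrete, here is a minimal sketch of two common eye-tracking measures, fixation count and dwell time within an area of interest (AOI); the fixation record format is an assumption for illustration, not a standard the paper defines.

```python
# Minimal sketch of two common eye-tracking metrics over an area of
# interest (AOI): fixation count and total dwell time. The fixation
# record layout is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # screen coordinates in pixels
    y: float
    duration_ms: float

def aoi_metrics(fixations, aoi):
    """aoi = (x_min, y_min, x_max, y_max) in the same pixel space."""
    x0, y0, x1, y1 = aoi
    inside = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
    return {"fixation_count": len(inside),
            "dwell_time_ms": sum(f.duration_ms for f in inside)}

# Example: fixations over a source-code editor pane during bug localization.
fixes = [Fixation(120, 340, 180), Fixation(130, 350, 220), Fixation(900, 60, 95)]
print(aoi_metrics(fixes, aoi=(100, 300, 400, 600)))
# -> {'fixation_count': 2, 'dwell_time_ms': 400.0}
```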
  4. This tutorial targets researchers and practitioners who are interested in AI and ML technologies for structural information extraction (IE) from unstructured textual sources. In particular, the tutorial will provide the audience with a systematic introduction to recent advances in IE by answering several important research questions: (i) how to develop a robust IE system from noisy, insufficient training data while ensuring the reliability of its predictions; (ii) how to foster the generalizability of IE by enhancing the system's cross-lingual, cross-domain, cross-task, and cross-modal transferability; (iii) how to precisely support extracting structural information with extremely fine-grained, diverse, and boundless labels; (iv) how to further improve IE by leveraging indirect supervision from other NLP tasks, such as NLI, QA, or summarization, and from pre-trained language models; and (v) how to acquire knowledge to guide the inference of IE systems. We will discuss several lines of frontier research that tackle these challenges, and we will conclude the tutorial by outlining directions for further investigation.
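
Question (iv) above, leveraging indirect supervision from NLI, can be illustrated with an off-the-shelf zero-shot pipeline in which an NLI model scores entailment of type hypotheses; the label set, hypothesis template, and model choice are our assumptions, not the tutorial's.

```python
# Minimal sketch of indirect supervision from NLI for information
# extraction: entity typing is recast as zero-shot classification, with an
# NLI model scoring hypotheses like "This text is about a person."
# The label set and model choice are illustrative assumptions.

from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

text = "Marie Curie won the Nobel Prize in Physics in 1903."
labels = ["person", "organization", "location", "event"]

result = classifier(text, candidate_labels=labels,
                    hypothesis_template="This text is about a {}.")
print(result["labels"][0], result["scores"][0])  # top predicted type
```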
  5. By enabling autonomous vehicles (AVs) to share data while driving, 5G vehicular communications allow AVs to collaborate on solving common autonomous driving tasks. AVs often rely on machine learning models to perform such tasks; as such, collaboration requires leveraging vehicular communications to improve the performance of machine learning algorithms. This paper provides a comprehensive literature survey of the intersection between machine learning for autonomous driving and vehicular communications. Throughout the paper, we explain how vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communications are used to improve machine learning in AVs, answering five major questions regarding such systems. These questions include: 1) How can AVs effectively transmit data wirelessly on the road? 2) How do AVs manage the shared data? 3) How do AVs use shared data to improve their perception of the environment? 4) How do AVs use shared data to drive more safely and efficiently? and 5) How can AVs protect the privacy of shared data and prevent cyberattacks? We also summarize data sources that may support research in this area and discuss the future research potential surrounding these five questions. 
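
As a toy illustration of question 3 (using shared data to improve perception), the sketch below has one AV transform object detections received over V2V from a peer into its own coordinate frame and merge duplicates. The flat 2-D frames and the 2 m merge radius are simplifying assumptions of ours, not a protocol from the survey.

```python
# Toy sketch of cooperative perception over V2V: an AV receives object
# detections from a peer vehicle, transforms them into its own coordinate
# frame, and keeps only detections it has not already seen. The 2-D frames
# and the 2 m merge threshold are illustrative assumptions.

import math

def to_ego_frame(detection, peer_pose):
    """peer_pose = (x, y, heading_rad) of the peer in the ego frame."""
    px, py, heading = peer_pose
    dx, dy = detection
    # Rotate the peer-frame point by the peer's heading, then translate.
    ex = px + dx * math.cos(heading) - dy * math.sin(heading)
    ey = py + dx * math.sin(heading) + dy * math.cos(heading)
    return (ex, ey)

def fuse(ego_dets, peer_dets, peer_pose, merge_radius=2.0):
    fused = list(ego_dets)
    for d in peer_dets:
        p = to_ego_frame(d, peer_pose)
        if all(math.dist(p, q) > merge_radius for q in fused):
            fused.append(p)   # e.g., a pedestrian the ego vehicle cannot see
    return fused

ego = [(10.0, 0.5)]                # objects the ego vehicle already sees
peer = [(5.0, 0.0), (-2.0, 1.0)]   # detections in the peer's frame
print(fuse(ego, peer, peer_pose=(12.0, 0.0, math.pi)))
```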