

This content will become publicly available on May 23, 2026

Title: Towards Certified Safe Personalization in Learning Enabled Human-in-the-loop Human-in-the-plant Systems
The paper presents AIIM, an Artificial Intelligence (AI) enabled personalIzation Management software for human-in-the-loop, human-in-the-plant learning-enabled systems (LES). AIIM can be integrated with LES software to help a human user achieve safe and effective operation under dynamically changing contexts. AIIM consists of: a) an AI technique to derive the model coefficients of a physics-guided surrogate model from operational data shared under privacy norms, and b) continuous model conformance checking to identify key changes in LES operational behavior that may jeopardize safety. We demonstrate two capabilities of AIIM, personalization and unknown error detection, through case studies that span a significant breadth of dynamic context change scenarios, including: a) involuntary change in user context, such as medication-induced change in glucose metabolism in automated insulin delivery (AID); b) actuation failure, such as cartridge blockage in AID; c) latent sensor error in aviation; and d) unknown coding error in autonomous car software patches. We compare AIIM personalization with human-in-the-loop and self-adaptive model-predictive control designs in real-life and simulation settings, showing safe and improved diabetes management.
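The abstract does not include implementation details, but its two named capabilities (deriving coefficients of a physics-guided surrogate from operational data, and continuous model conformance checking) can be illustrated with a minimal sketch. The linear one-step glucose surrogate, the coefficient values, the threshold, and all function names below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def fit_surrogate(g, u):
    """Least-squares fit of (a, b, c) for the illustrative discrete
    surrogate g[k+1] = a*g[k] + b*u[k] + c, given a glucose trace g of
    length n+1 and an insulin-input trace u of length n."""
    X = np.column_stack([g[:-1], u, np.ones(len(u))])
    coeffs, *_ = np.linalg.lstsq(X, g[1:], rcond=None)
    return coeffs

def conformance_flags(g, u, coeffs, threshold=5.0):
    """Flag steps whose one-step prediction error exceeds the threshold;
    a stand-in for AIIM's continuous model conformance check."""
    a, b, c = coeffs
    pred = a * g[:-1] + b * u + c
    return np.abs(g[1:] - pred) > threshold

# Synthetic "operational data" from assumed true dynamics.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 2.0, size=200)
g = np.empty(201)
g[0] = 120.0
for k in range(200):
    g[k + 1] = 0.9 * g[k] - 8.0 * u[k] + 14.0

a, b, c = fit_surrogate(g, u)
print(np.round([a, b, c], 3))  # recovers ~[0.9, -8.0, 14.0] on noiseless data
assert not conformance_flags(g, u, (a, b, c)).any()

# A simulated context change (glucose decay coefficient shifts, standing
# in for a medication-induced metabolism change) breaks conformance.
g2 = np.empty(201)
g2[0] = 120.0
for k in range(200):
    g2[k + 1] = 0.8 * g2[k] - 8.0 * u[k] + 14.0
assert conformance_flags(g2, u, (a, b, c)).any()
```

The simulated coefficient shift produces one-step prediction errors above the conformance threshold, which is the kind of operational-behavior change the abstract describes AIIM flagging.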
Award ID(s):
2436801
PAR ID:
10650901
Author(s) / Creator(s):
Publisher / Repository:
ACM Transactions on Emerging Technologies in Computing Systems
Date Published:
Journal Name:
ACM Journal on Emerging Technologies in Computing Systems
ISSN:
1550-4832
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Simulation-based learning has become a cornerstone of healthcare education, fostering essential skills such as communication, teamwork, and decision-making in safe, controlled environments. However, participants' reflection on simulations often relies on subjective recollections, limiting its effectiveness in promoting learning. This symposium explores how multimodal analytics and AI can enhance simulation-based education by automating the analysis of teamwork data, providing structured feedback, and supporting reflective practices. The papers examine real-time analytics for closed-loop communication in cardiac arrest simulations, the use of multimodal data to refine feedback in ICU nursing simulations, generative AI-powered chatbots that help nursing students interpret multimodal learning analytics dashboards, and culturally sensitive, AI-based scenarios for Breaking Bad News in an Indian context. Collectively, these contributions highlight the transformative potential of data- and AI-enhanced solutions, emphasizing personalization, cultural sensitivity, and human-centered design, and invite dialogue on the pedagogical, technological, and ethical implications of introducing data-based practices and AI-based tools in medical education.
  2. Advances in deep learning and the Internet of Things have led to diverse human sensing applications. However, distinct patterns in human sensing, influenced by various factors or contexts, challenge the performance of generic neural network models due to natural distribution shifts. To address this, personalization tailors models to individual users. Yet most personalization studies overlook intra-user heterogeneity across contexts in sensory data, limiting intra-user generalizability. This limitation is especially critical in clinical applications, where limited data availability hampers both generalizability and personalization. Notably, intra-user sensing attributes are expected to change due to external factors such as treatment progression, further compounding the challenge. To address the intra-user generalization challenge, this work introduces CRoP, a novel static personalization approach. CRoP leverages off-the-shelf pre-trained models as generic starting points and captures user-specific traits through adaptive pruning of a minimal sub-network, while allowing generic knowledge to be retained in the remaining parameters. CRoP demonstrates superior personalization effectiveness and intra-user robustness across four human-sensing datasets, including two from real-world health domains, underscoring its practical and social impact. Additionally, to support CRoP's generalization ability and design choices, we provide empirical justification through gradient inner product analysis, ablation studies, and comparisons against state-of-the-art baselines.
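CRoP's actual pruning criterion and training procedure are not specified in the abstract above. As a hedged sketch, its core idea (adapt only a minimal sub-network of a pre-trained model on user data while freezing the rest as generic knowledge) reduces to a masked parameter update. The magnitude-based selection rule, the 25% fraction, and all names here are assumptions for illustration:

```python
import numpy as np

def personalization_mask(weights, frac=0.1):
    """Select the fraction of lowest-magnitude weights as the minimal
    sub-network to adapt (an assumed stand-in for CRoP's adaptive
    pruning criterion); the remaining weights stay frozen."""
    k = max(1, int(frac * weights.size))
    idx = np.argsort(np.abs(weights), axis=None)[:k]
    mask = np.zeros(weights.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(weights.shape)

def personalize_step(weights, grad, mask, lr=0.01):
    """One masked gradient step on user data: only the selected
    sub-network moves; generic knowledge is untouched."""
    return weights - lr * grad * mask

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))            # pre-trained generic weights
mask = personalization_mask(w, frac=0.25)
g = rng.normal(size=(4, 4))            # gradient from user-specific data
w_new = personalize_step(w, g, mask)

# Frozen weights are unchanged; exactly 4 of 16 were adapted.
assert np.allclose(w_new[~mask], w[~mask])
assert mask.sum() == 4
```

The split between adapted and frozen parameters mirrors the abstract's claim that user-specific traits live in a minimal sub-network while generic knowledge remains in the rest.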
  3. An overarching goal of Artificial Intelligence (AI) is creating autonomous, social agents that help people. Two important challenges, though, are that different people prefer different assistance from agents and that preferences can change over time. Thus, helping behaviors should be tailored to how an individual feels during the interaction. We hypothesize that human nonverbal behavior can give clues about users' preferences for an agent's helping behaviors, augmenting an agent's ability to computationally predict such preferences with machine learning models. To investigate our hypothesis, we collected data from 194 participants via an online survey in which participants were recorded while playing a multiplayer game. We evaluated whether the inclusion of nonverbal human signals, as well as additional context (e.g., via game or personality information), led to improved prediction of user preferences between agent behaviors compared to explicitly provided survey responses. Our results suggest that nonverbal communication -- a common type of human implicit feedback -- can aid in understanding how people want computational agents to interact with them. 
  4. With the rise of AI, algorithms have become better at learning underlying patterns from training data, including ingrained social biases based on gender, race, etc. Deployment of such algorithms in domains such as hiring, healthcare, and law enforcement has raised serious concerns about fairness, accountability, trust, and interpretability in machine learning algorithms. To alleviate this problem, we propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases in tabular datasets. It uses a graphical causal model to represent causal relationships among different features in the dataset and as a medium to inject domain knowledge. A user can detect the presence of bias against a group, say females, or a subgroup, say Black females, by identifying unfair causal relationships in the causal network and using an array of fairness metrics. Thereafter, the user can mitigate bias by refining the causal model and acting on the unfair causal edges. For each interaction, say weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset based on the current causal model while ensuring minimal change from the original dataset. Users can visually assess the impact of their interactions on different fairness metrics, utility metrics, data distortion, and the underlying data distribution. Once satisfied, they can download the debiased dataset and use it for any downstream application for fairer predictions. We evaluate D-BIAS through experiments on three datasets and a formal user study. We find that D-BIAS helps reduce bias significantly compared to the baseline debiasing approach across different fairness metrics, while incurring little data distortion and a small loss in utility. Moreover, our human-in-the-loop approach significantly outperforms an automated approach on trust, interpretability, and accountability.
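D-BIAS's dataset-simulation method is not described in the abstract above, but the effect of the central interaction (deleting an unfair causal edge and regenerating the data, then checking a fairness metric) can be sketched with a toy linear causal model. The variable names, coefficients, and the demographic-parity metric choice are all illustrative assumptions:

```python
import random

def simulate(n, edge_strength, seed=0):
    """Toy causal model: a protected attribute influences a hiring
    outcome through one (possibly unfair) causal edge, alongside a
    legitimate skill feature. Names and coefficients are illustrative."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        protected = rng.random() < 0.5
        skill = rng.gauss(0.0, 1.0)
        score = skill + edge_strength * (1.0 if protected else 0.0)
        data.append((protected, score > 0.0))
    return data

def demographic_parity_gap(data):
    """|P(hired | protected) - P(hired | not protected)|."""
    def rate(flag):
        grp = [hired for p, hired in data if p == flag]
        return sum(grp) / len(grp)
    return abs(rate(True) - rate(False))

biased = simulate(20000, edge_strength=1.0)
debiased = simulate(20000, edge_strength=0.0)  # user deletes the unfair edge
gap_before = demographic_parity_gap(biased)
gap_after = demographic_parity_gap(debiased)
assert gap_after < gap_before
```

Deleting the edge and regenerating the outcome shrinks the demographic parity gap, which is the before/after comparison a D-BIAS user would read off the fairness-metric display; the real tool additionally constrains the regenerated dataset to stay close to the original.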
  5. AI-enabled decision-support systems aim to help medical providers rapidly make decisions with limited information during medical emergencies. A critical challenge in developing these systems is supporting providers in interpreting the system output to make optimal treatment decisions. In this study, we designed and evaluated an AI-enabled decision-support system to aid providers in treating patients with traumatic injuries. We first conducted user research with physicians to identify and design information types and AI outputs for a decision-support display. We then conducted an online experiment with 35 medical providers from six health systems to evaluate two human-AI interaction strategies: (1) AI information synthesis and (2) AI information and recommendations. We found that providers were more likely to make correct decisions when AI information and recommendations were provided compared to receiving no AI support. We also identified two socio-technical barriers to providing AI recommendations during time-critical medical events: (1) an accuracy-time trade-off in providing recommendations and (2) polarizing perceptions of recommendations between providers. We discuss three implications for developing AI-enabled decision support used in time-critical events, contributing to the limited research on human-AI interaction in this context. 