-
Navigating dilemmas involving conflicting values is challenging even for humans in high-stakes domains, let alone for AI, yet prior work has been limited to everyday scenarios. To close this gap, we introduce CLASH (Character perspective-based LLM Assessments in Situations with High-stakes), a meticulously curated dataset consisting of 345 high-impact dilemmas along with 3,795 individual perspectives of diverse values. CLASH enables the study of critical yet underexplored aspects of value-based decision-making, including the understanding of decision ambivalence and psychological discomfort, as well as the temporal shifts of values in the perspectives of characters. By benchmarking 14 non-thinking and thinking models, we uncover several key findings. (1) Even strong proprietary models, such as GPT-5 and Claude-4-Sonnet, struggle with ambivalent decisions, achieving accuracies of only 24.06 and 51.01, respectively. (2) Although LLMs reasonably predict psychological discomfort, they do not adequately comprehend perspectives involving value shifts. (3) Cognitive behaviors that are effective in the math-solving and game-strategy domains do not transfer to value reasoning. Instead, new failure patterns emerge, including early commitment and overcommitment. (4) The steerability of LLMs towards a given value is significantly correlated with their value preferences. (5) Finally, LLMs exhibit greater steerability when reasoning from a third-party perspective, although certain values (e.g., safety) benefit uniquely from first-person framing.
Free, publicly-accessible full text available January 1, 2027
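The evaluation implied by findings (1) and (4) above — scoring model decisions against gold labels on ambivalent dilemmas, and correlating per-value steerability with value preference — can be sketched as follows. All data values, label names, and scores here are hypothetical illustrations; CLASH's actual format and scoring protocol are not reproduced.

```python
import numpy as np

# Hypothetical per-dilemma records: the model's predicted decision vs. the
# gold decision, with a flag marking ambivalent cases (illustrative only).
preds = np.array(["act", "refrain", "ambivalent", "act", "ambivalent"])
golds = np.array(["act", "act",     "ambivalent", "act", "refrain"])
ambiv = np.array([False, False, True, False, True])

overall_acc = float((preds == golds).mean())          # accuracy on all items
ambiv_acc   = float((preds[ambiv] == golds[ambiv]).mean())  # ambivalent only

# Finding (4): correlate per-value steerability with value preference.
# Pearson correlation via np.corrcoef (scores are made up for illustration).
steerability = np.array([0.82, 0.61, 0.45, 0.90, 0.55])
preference   = np.array([0.75, 0.58, 0.40, 0.88, 0.60])
r = float(np.corrcoef(steerability, preference)[0, 1])
```

With these toy arrays, accuracy on the ambivalent subset is lower than overall accuracy, mirroring the gap the benchmark reports at scale.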
-
Lu et al. (2025) showed that HDRI camera sensors at different viewpoints can capture consistent and transferable luminance patterns in daylit spaces through Conditional Generative Adversarial Networks (CGANs). Building on that work, this paper validates that non-intrusive luminance monitoring can be used to evaluate daylighting preferences, using experimental datasets collected with human subjects at different seating locations in a real open-plan office. To apply paired comparisons for effective learning, subjects compared successive pairs of different visual conditions and indicated their visual preferences through online surveys. Meanwhile, ten small, low-cost, calibrated cameras captured luminance maps from both the field of view (FOV) of each occupant and from non-intrusive viewpoints (on computer monitors, luminaires/ceiling, and desks) under various sky conditions and interior luminance distributions. Convolutional Neural Network (CNN) models were developed and trained on luminance similarity index maps (generated from pixel-wise comparisons between successive luminance maps captured from the FOV and non-intrusive cameras separately) to classify each subject's daylight visual preferences. The results showed that the models trained on luminance distributions measured by monitor-mounted and ceiling-mounted cameras produced preference predictions consistent with those derived from FOV cameras, and that they can reliably learn visual preferences (83-94% accuracy) at all locations except those furthest from the windows. Overall, this study is the first to demonstrate that daylight preferences can be learned non-invasively by employing the full potential of HDRI and deep learning techniques, marking a significant milestone toward practical, AI-assisted, human-centered daylighting operation.
Free, publicly-accessible full text available January 1, 2027
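The pixel-wise comparison step described above — turning two successive luminance maps into a similarity index map that a CNN can then classify — can be sketched minimally. The paper's exact index definition is not given here, so this sketch assumes a bounded per-pixel ratio (min/max), which equals 1 for identical luminances and approaches 0 as they diverge.

```python
import numpy as np

def similarity_index_map(lum_a, lum_b, eps=1e-6):
    """Pixel-wise similarity between two luminance maps (cd/m^2).

    Assumed, illustrative definition: the bounded ratio min/max per pixel,
    1.0 for unchanged pixels and near 0.0 where luminance changed sharply.
    """
    a = np.asarray(lum_a, dtype=float)
    b = np.asarray(lum_b, dtype=float)
    return (np.minimum(a, b) + eps) / (np.maximum(a, b) + eps)

# Two successive 4x4 luminance maps (synthetic values): luminance doubles
# everywhere except at one pixel, which stays unchanged.
before = np.full((4, 4), 100.0)
after  = np.full((4, 4), 200.0)
after[0, 0] = 100.0

sim = similarity_index_map(before, after)
```

Stacks of such maps, paired with the survey responses, would form the training inputs and labels for the preference-classification CNN.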
-
Luminance monitoring within the field of view (FOV) is required for assessing visual comfort and overall visual preferences, but it is practically challenging and intrusive. As a result, real-time, human-centered daylighting operation remains a challenge. This paper presents a novel deep-learning-based framework to demonstrate that meaningful features in the occupant's visual field can be extracted without invasive measurements. It is the first proof of concept to show that it is feasible to monitor luminance distributions as perceived by people, using a non-intrusive camera integrated with deep learning neural networks. A Conditional Generative Adversarial Network (CGAN), pix2pix, is used to transfer information from non-intrusive images to FOV images. Two datasets were collected in an open-plan office with compact, low-cost High Dynamic Range Image (HDRI) cameras installed at two alternate locations (on a wall or a monitor), to separately train two pix2pix models with the same target FOV images. The results show that the generated FOV images closely resemble the measured FOV images in terms of pixel-wise luminance errors, mean luminance, and structural similarity. The main errors arise from bright scenes visible through windows and are confined to a very small number of pixels. Overall, this work establishes a basis for future studies to assess the effect of the visual environment on human perception using non-intrusive measurements. It also provides the theoretical foundation for a connected paper (Lu et al., 2025), which demonstrates that non-intrusive measurements and deep learning techniques can be used to discover daylight preferences and enable AI-assisted daylighting operation.
Free, publicly-accessible full text available January 1, 2027
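Two of the three evaluation criteria named above — pixel-wise luminance error and mean luminance — reduce to simple array operations and can be sketched directly; structural similarity needs a windowed SSIM implementation and is omitted from this sketch. The luminance values below are synthetic.

```python
import numpy as np

def fov_reconstruction_metrics(generated, measured):
    """Compare a generated FOV luminance map against the measured one.

    Returns the mean pixel-wise absolute luminance error and the
    difference in mean luminance (generated minus measured).
    """
    g = np.asarray(generated, dtype=float)
    m = np.asarray(measured, dtype=float)
    pixel_mae  = float(np.abs(g - m).mean())
    mean_delta = float(g.mean() - m.mean())
    return pixel_mae, mean_delta

# A measured 8x8 FOV luminance map and a noisy "generated" counterpart,
# standing in for real pix2pix output (synthetic data, fixed seed).
measured  = np.full((8, 8), 120.0)
generated = measured + np.random.default_rng(0).normal(0, 5, (8, 8))

mae, dmean = fov_reconstruction_metrics(generated, measured)
```

A faithful generator keeps both numbers small; localized failures such as the bright window regions mentioned above inflate the pixel-wise error while barely moving the mean luminance.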
-
This survey explores the transformative impact of foundation models (FMs) in artificial intelligence, focusing on their integration with federated learning (FL) in biomedical research. Foundation models such as ChatGPT, LLaMA, and CLIP, which are trained on vast datasets through methods including unsupervised pretraining, self-supervised learning, instruction fine-tuning, and reinforcement learning from human feedback, represent significant advancements in machine learning. These models, with their ability to generate coherent text and realistic images, are crucial for biomedical applications that require processing diverse data forms such as clinical reports, diagnostic images, and multimodal patient interactions. The incorporation of FL with these sophisticated models presents a promising strategy to harness their analytical power while safeguarding the privacy of sensitive medical data. This approach not only enhances the capabilities of FMs in medical diagnostics and personalized treatment but also addresses critical concerns about data privacy and security in healthcare. This survey reviews the current applications of FMs in federated settings, underscores the challenges, and identifies future research directions, including scaling FMs, managing data diversity, and enhancing communication efficiency within FL frameworks. The objective is to encourage further research into the combined potential of FMs and FL, laying the groundwork for healthcare innovations.
Free, publicly-accessible full text available December 1, 2026
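The core aggregation step underlying the federated settings surveyed above is FedAvg: each client trains locally and the server averages the resulting parameters, weighted by local data size, so raw patient data never leaves the client. A minimal sketch (layer names and values are illustrative; real federated fine-tuning of foundation models adds compression, adapter layers, and secure aggregation):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg: average client parameters weighted by local dataset size.

    client_params: list of dicts mapping layer name -> weight array.
    client_sizes:  number of local training examples per client.
    """
    total = float(sum(client_sizes))
    keys = client_params[0].keys()
    return {
        k: sum((n / total) * p[k] for p, n in zip(client_params, client_sizes))
        for k in keys
    }

# Two hospitals share only model weights; hospital 2 holds 3x the data,
# so its parameters dominate the weighted average.
c1 = {"w": np.array([0.0, 0.0])}
c2 = {"w": np.array([4.0, 8.0])}
global_model = fedavg([c1, c2], client_sizes=[1, 3])
```

The communication-efficiency challenges the survey highlights stem from this loop: for billion-parameter FMs, shipping full weight dicts each round is the bottleneck that adapter-based and compressed variants aim to remove.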
-
Federated learning (FL)-based object detection systems provide many advantages, such as efficiency and privacy. However, performance degradation due to data heterogeneity remains a critical yet often overlooked challenge in recent FL research. In this paper, we address the data heterogeneity issue by introducing a model contrastive loss, which significantly improves performance compared to baseline methods. In addition, focal loss is applied to further enhance prediction accuracy on minority-class objects. Experimental results demonstrate the effectiveness of the proposed federated training framework, achieving approximately 20% improvement in mean average precision (mAP) over the FedAvg baseline. Furthermore, extensive ablation studies on different hyperparameters in the model contrastive loss are conducted, providing deeper insights into the impact of parameter selection.
Free, publicly-accessible full text available November 1, 2026
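Of the two losses named above, focal loss has a standard closed form and is easy to sketch: it down-weights well-classified examples so that rare, hard, minority-class objects dominate the gradient. (The model contrastive loss compares representations of the local, global, and previous local models and is not sketched here.) A minimal NumPy version for the binary case:

```python
import numpy as np

def focal_loss(probs, labels, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t), averaged.

    probs:  predicted probability of the positive class, in (0, 1).
    labels: 1 for the positive class, 0 for the negative class.
    alpha/gamma defaults follow common detection practice.
    """
    p = np.asarray(probs, dtype=float)
    y = np.asarray(labels, dtype=float)
    p_t     = np.where(y == 1, p, 1.0 - p)          # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# A confidently correct positive (p=0.95) contributes almost nothing,
# while a hard positive (p=0.30) keeps a substantial loss.
easy = focal_loss([0.95], [1])
hard = focal_loss([0.30], [1])
```

The `(1 - p_t)^gamma` modulating factor is what realizes the minority-class benefit claimed above: as detections on majority classes become confident, their loss collapses toward zero and training capacity shifts to the remaining hard objects.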