

This content will become publicly available on July 11, 2025

Title: Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration
Award ID(s): 2016727
PAR ID: 10544448
Publisher / Repository: Proceedings of Machine Learning Research
ISSN: 2640-3498
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Studies of voluntary visual spatial attention have used attention-directing cues, such as arrows, to induce or instruct observers to focus selective attention on relevant locations in visual space to detect or discriminate subsequent target stimuli. In everyday vision, however, voluntary attention is influenced by a host of factors, most of which differ markedly from the laboratory paradigms that use attention-directing cues. These factors include priming, experience, reward, meaning, motivations, and high-level behavioral goals. Attention that is endogenously directed in the absence of external attention-directing cues has been referred to as “self-initiated attention” or, as in our prior work, as “willed attention,” in which volunteers decide where to attend in response to a prompt to do so. Here, we used a novel paradigm that eliminated external influences (i.e., attention-directing cues and prompts) on where and when spatial attention should be directed. Using machine learning decoding methods, we showed that the well-known lateralization of EEG alpha power during spatial attention was also present during purely self-generated attention. By eliminating explicit cues or prompts that affect the allocation of voluntary attention, this work advances our understanding of the neural correlates of attentional control and provides a step toward the development of EEG-based brain–computer interfaces that tap into human intentions.
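
    A minimal, hypothetical sketch (in Python, using NumPy, SciPy, and scikit-learn) of the kind of decoding analysis described above: posterior alpha-band power typically decreases contralateral to the attended hemifield, so a single lateralization index already carries decodable signal. This is not the authors' pipeline; the sampling rate, channel grouping, band edges, and logistic-regression classifier are all assumptions for illustration.

        # Hypothetical sketch: decode the attended hemifield (left vs. right)
        # from the lateralization of EEG alpha-band (8-12 Hz) power.
        # Toy random data stands in for real posterior EEG epochs.
        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        FS = 256                 # sampling rate in Hz (assumed)
        ALPHA = (8.0, 12.0)      # alpha band edges in Hz (assumed)

        def alpha_power(epochs, fs=FS, band=ALPHA):
            # Band-pass each epoch to the alpha band and return mean power per
            # trial and channel; epochs: (n_trials, n_channels, n_samples).
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, epochs, axis=-1)
            return (filtered ** 2).mean(axis=-1)

        def lateralization_index(power, left_idx, right_idx):
            # (right - left) / (right + left) alpha power over posterior channels.
            right = power[:, right_idx].mean(axis=1)
            left = power[:, left_idx].mean(axis=1)
            return (right - left) / (right + left)

        rng = np.random.default_rng(0)
        n_trials, n_channels, n_samples = 200, 8, 2 * FS
        epochs = rng.standard_normal((n_trials, n_channels, n_samples))
        labels = rng.integers(0, 2, n_trials)    # 0 = attend left, 1 = attend right

        power = alpha_power(epochs)
        li = lateralization_index(power, left_idx=[0, 1, 2, 3],
                                  right_idx=[4, 5, 6, 7])

        # Single-feature linear decoder, evaluated with 5-fold cross-validation;
        # on this random toy data, accuracy should hover near chance (0.50).
        scores = cross_val_score(LogisticRegression(), li[:, None], labels, cv=5)
        print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")

    In a real analysis, the feature would more likely be the full channel-wise alpha-power pattern (or time-resolved power) rather than a single index, with decoders trained per participant; the sketch only shows the shape of the computation.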

     