This content will become publicly available on June 22, 2026
Holistic Design towards Resource-Stringent Binary Vector Symbolic Architecture
Classification tasks on ultra-lightweight devices must run within tight resource constraints while delivering swift responses. Binary Vector Symbolic Architecture (VSA) is a promising approach due to its minimal memory requirements and fast execution times compared to traditional machine learning (ML) methods. Nonetheless, binary VSA's practicality is limited by its inferior inference accuracy and by designs that prioritize algorithmic over hardware optimization. This paper introduces UniVSA, a binary VSA framework co-optimized for both algorithm and hardware. UniVSA not only significantly enhances inference accuracy beyond current state-of-the-art binary VSA models but also reduces memory footprints. It incorporates novel, lightweight modules and a design flow tailored for optimal hardware performance. Experimental results show that UniVSA surpasses traditional ML methods on resource-limited devices, achieving smaller memory usage, lower latency, reduced resource demand, and decreased power consumption.
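As a rough illustration of the inference style binary VSA enables (a minimal sketch, not UniVSA's actual modules), the code below encodes feature vectors into binary hypervectors, bundles per-class prototypes by bitwise majority, and classifies by Hamming distance. The dimension, random projection, and class count are illustrative assumptions:

```python
import numpy as np

D = 8192          # hypervector dimension (assumed; UniVSA's exact size may differ)
N_FEATURES = 64   # input feature count (illustrative)
N_CLASSES = 4     # illustrative

rng = np.random.default_rng(0)
proj = rng.standard_normal((D, N_FEATURES))  # fixed random projection

def encode(x):
    """Binarize a real-valued feature vector into a {0,1} hypervector."""
    return (proj @ x > 0).astype(np.uint8)

def fit(X, y):
    """Bundle per-sample hypervectors into one binary prototype per class
    via a bitwise majority vote."""
    protos = np.zeros((N_CLASSES, D), dtype=np.uint8)
    for c in range(N_CLASSES):
        H = np.stack([encode(x) for x in X[y == c]])
        protos[c] = (H.mean(axis=0) > 0.5).astype(np.uint8)
    return protos

def predict(x, protos):
    """Classify by minimum Hamming distance; XOR + popcount maps well
    to lightweight hardware."""
    h = encode(x)
    return int(np.argmin((protos ^ h).sum(axis=1)))
```

The appeal for ultra-lightweight devices is that both storage (one binary vector per class) and inference (XOR and popcount) avoid floating-point arithmetic entirely.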
- Award ID(s):
- 2326598
- PAR ID:
- 10618528
- Publisher / Repository:
- Design Automation Conference (DAC)
- Date Published:
- Format(s):
- Medium: X
- Location:
- San Francisco, CA
- Sponsoring Org:
- National Science Foundation
More Like this
-
Brain-computer interfaces (BCIs) are typically designed to be lightweight and responsive in real time so they can give users timely feedback. Classical feature engineering is computationally efficient but has low accuracy, whereas recent deep neural networks (DNNs) improve accuracy at the cost of heavy computation and high latency. As a promising alternative, the low-dimensional computing (LDC) classifier based on vector symbolic architecture (VSA) achieves a small model size yet higher accuracy than classical feature engineering methods. However, its accuracy still lags behind that of modern DNNs, making complex brain signals challenging to process. Knowledge distillation is a popular way to improve the accuracy of a small model, but maintaining a constant level of distillation between the teacher and student models may not suit a growing student across its progressive learning stages. In this work, we propose a simple scheduled knowledge distillation method based on curriculum data ordering that lets the student gradually build knowledge from the teacher model, controlled by an alpha scheduler. Meanwhile, we employ LDC/VSA as the student model to enhance on-device inference efficiency for tiny BCI devices that demand low latency. Empirical results demonstrate that our approach achieves a better tradeoff between accuracy and hardware efficiency than other methods.
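A minimal sketch of what a scheduled distillation loss could look like, assuming a linear alpha ramp and a standard temperature-softened KL term; the paper's actual schedule shape, temperature, and curriculum data ordering are not reproduced here:

```python
import torch
import torch.nn.functional as F

def alpha_schedule(epoch: int, total_epochs: int) -> float:
    """Illustrative linear ramp: the distillation weight grows as the
    student matures (the paper's exact scheduler may differ)."""
    return min(1.0, epoch / total_epochs)

def kd_loss(student_logits, teacher_logits, labels, alpha, T=4.0):
    """Blend hard-label cross-entropy with temperature-softened KL
    divergence toward the teacher's soft targets."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard T^2 gradient rescaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

Early in training the student leans on ground-truth labels; as alpha grows it absorbs more of the teacher's soft knowledge, matching the intuition of a student that is ready for harder material over time.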
-
Precise seizure identification plays a vital role in understanding cortical connectivity and informing treatment decisions. Yet manual diagnosis of epileptic seizures is both labor-intensive and highly specialized. In this study, we propose a Hyperdimensional Computing (HDC) classifier for accurate and efficient multi-type seizure classification. Whereas previous seizure analysis efforts using HDC were limited to binary detection (seizure or no seizure), our work breaks new ground by utilizing HDC to classify seizures into multiple distinct types. HDC offers significant advantages, such as lower memory requirements, a reduced hardware footprint for wearable devices, and decreased computational complexity. Because of these attributes, HDC can serve as an alternative to traditional machine learning methods, making it a practical and efficient solution, particularly in resource-limited scenarios or wearable applications. We evaluated the proposed technique on the latest version of the TUH EEG Seizure Corpus (TUSZ), and the results demonstrate noteworthy performance, achieving a weighted F1 score of 94.6%. This outcome matches, or even exceeds, the performance of state-of-the-art traditional machine learning methods.
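For intuition, a common record-based HDC encoding (not necessarily the one used in this work) binds a random ID hypervector per channel with a quantized level hypervector and bundles the results by majority vote; the dimension, channel count, and level count below are illustrative assumptions:

```python
import numpy as np

D = 10000         # hypervector dimension (illustrative)
N_CHANNELS = 19   # e.g., EEG channels (assumed)
N_LEVELS = 16     # quantization levels for feature values (assumed)

rng = np.random.default_rng(1)
id_hvs = rng.integers(0, 2, (N_CHANNELS, D), dtype=np.uint8)   # one per channel
lvl_hvs = rng.integers(0, 2, (N_LEVELS, D), dtype=np.uint8)    # one per level

def encode(sample):
    """Record-based encoding: bind each channel ID with its quantized
    value level (XOR), then bundle all bindings by bitwise majority.
    Assumes features are pre-scaled to [0, 1]."""
    lv = np.clip((sample * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    bound = id_hvs ^ lvl_hvs[lv]                  # (N_CHANNELS, D) bindings
    return (bound.mean(axis=0) > 0.5).astype(np.uint8)
```

Class hypervectors for each seizure type would then be bundled from training encodings and queried by Hamming distance, exactly as in the binary VSA sketch above.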
-
Current state-of-the-art resource management systems leverage Machine Learning (ML) methods to enable the efficient use of heterogeneous memory hardware deployed across emerging computing platforms. While machine intelligence can be effectively used to learn and predict the complex data access patterns of modern analytics, applying ML over exploded data sizes and memory footprints is prohibitive for practical system-level integration. For this reason, recent solutions use existing lightweight historical information to predict the access behavior of the majority of application pages, and train ML models over a small page subset. To maximize application performance improvements, the pages selected for machine learning-based management are identified with elaborate page selection methods. These methods involve calculating detailed performance estimates that depend on the configuration of the hybrid memory platform. This paper aims to reduce such vast operational overheads, which further exacerbate the already high overheads of using machine intelligence in return for high performance and efficiency. To this end, we build Cronus, an image-based pipeline for selecting pages for ML-based management. We visualize memory access patterns and reveal spatial and temporal correlations among the selected pages that current methods fail to leverage. We then use the created images to detect patterns and select page groups for machine learning model deployment. Cronus drastically reduces operational costs while preserving the effectiveness of the page selection and the achieved performance of machine-intelligent hybrid memory management. This work makes a case that visualization and computer vision methods can unlock new insights and reduce the operational complexity of emerging systems solutions.
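As a hedged sketch of the general idea (the function below is hypothetical, not Cronus's actual pipeline), a memory access trace can be rasterized into a pages-by-time heatmap so that spatial and temporal correlations become visible to vision-style pattern detectors:

```python
import numpy as np

def trace_to_image(times, pages, t_bins=256, p_bins=256):
    """Rasterize a memory access trace into a 2D heatmap (pages x time).
    Hot regions correspond to dense access clusters; assumes a nonempty trace."""
    t_edges = np.linspace(times.min(), times.max(), t_bins)
    p_edges = np.linspace(pages.min(), pages.max(), p_bins)
    t_idx = np.clip(np.digitize(times, t_edges) - 1, 0, t_bins - 1)
    p_idx = np.clip(np.digitize(pages, p_edges) - 1, 0, p_bins - 1)
    img = np.zeros((p_bins, t_bins), dtype=np.float32)
    np.add.at(img, (p_idx, t_idx), 1.0)  # accumulate access counts per cell
    return img / img.max()               # normalize for image-style processing
```

Once the trace is an image, off-the-shelf pattern detection or clustering can group pages with similar spatio-temporal behavior instead of scoring each page individually.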
-
The rapid proliferation of resource-constrained IoT devices across sectors like healthcare, industrial automation, and finance introduces major security challenges. Traditional digital signatures, though foundational for authentication, are often infeasible for low-end devices with limited computational, memory, and energy resources. The rise of quantum computing also necessitates post-quantum (PQ) secure alternatives. However, NIST-standardized PQ signatures impose substantial overhead, limiting their practicality in energy-sensitive applications such as wearables, where signer-side efficiency is critical. To address these challenges, we present LightQSign (LiteQS), a novel lightweight PQ signature that achieves near-optimal signature generation efficiency with only a small, constant number of hash operations per signing. Its core innovation enables verifiers to obtain one-time hash-based public keys without interacting with signers or third parties, through secure computation. We formally prove the security of LiteQS in the random oracle model and evaluate its performance on commodity hardware and a resource-constrained 8-bit AtMega128A1 microcontroller. Experimental results show that LiteQS outperforms NIST PQ standards with lower computational overhead, minimal memory usage, and compact signatures. On an 8-bit microcontroller, it achieves up to 1.5–24× higher energy efficiency and 1.7–22× shorter signatures than PQ counterparts, and 56–76× better energy efficiency than conventional standards, enabling longer device lifespans and scalable, quantum-resilient authentication.
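For background only, the classic Lamport one-time signature below shows the hash-based building block that schemes in this family start from; LiteQS's key innovation, letting verifiers obtain one-time public keys non-interactively, is not reproduced here:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()
N = 256  # bits in the message digest

def keygen():
    """Lamport OTS keygen: two random preimages per digest bit.
    One-time only -- a key pair must never sign two messages."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(N)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg, sk):
    """Reveal, for each digest bit, the preimage matching that bit."""
    d = int.from_bytes(H(msg), "big")
    return [sk[i][(d >> (N - 1 - i)) & 1] for i in range(N)]

def verify(msg, sig, pk):
    """Hash each revealed preimage and compare against the public key."""
    d = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(d >> (N - 1 - i)) & 1] for i in range(N))
```

Signing costs only hashing, which is why hash-based designs suit energy-sensitive signers; the engineering challenge LiteQS targets is distributing the per-message public keys efficiently.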